{"id":1230,"date":"2022-10-27T23:01:16","date_gmt":"2022-10-27T15:01:16","guid":{"rendered":"https:\/\/qaqaq.top\/?p=1230"},"modified":"2022-11-27T12:39:48","modified_gmt":"2022-11-27T04:39:48","slug":"%e9%9b%86%e5%90%88-map%e6%8e%a5%e5%8f%a3%e5%8f%8a%e5%85%b6%e5%a4%9a%e4%b8%aa%e5%ae%9e%e7%8e%b0%e7%b1%bb%e7%9a%84%e5%af%b9%e6%af%94%e3%80%81map%e4%b8%ad%e5%ad%98%e5%82%a8%e7%9a%84key-value%e7%9a%84","status":"publish","type":"post","link":"https:\/\/qaqaq.top\/?p=1230","title":{"rendered":"Collections - Comparing the Map Interface and Its Implementation Classes, Characteristics of the key-value Pairs Stored in a Map, the Underlying Implementation of HashMap in JDK 7 and JDK 8, Source Code Analysis of HashMap in JDK 7 and JDK 8, the Underlying Implementation of LinkedHashMap, Common Map Methods"},"content":{"rendered":"\n<pre class=\"wp-block-code\"><code>package top.qaqaq.java.P548;\r\n\r\nimport org.junit.jupiter.api.Test;\r\n\r\nimport java.util.*;\r\n\r\n\/**\r\n * |----Map: double-column data, stores key-value pairs ---similar to the function y = f(x) from high-school math\r\n *      |----HashMap: the main implementation class of Map; not thread-safe, high efficiency; allows null keys and values\r\n *          |----LinkedHashMap: guarantees that iterating over the map visits the elements in insertion order.\r\n *                  Reason: on top of the original HashMap structure, each entry carries an extra pair of pointers to the previous and next element.\r\n *                  
For frequent traversal operations, this class performs better than HashMap.\r\n *      |----TreeMap: keeps the key-value pairs sorted by key, enabling sorted iteration. Sorting uses the natural ordering of the keys or a custom Comparator\r\n *                      underlying structure: red-black tree\r\n *      |----Hashtable: the legacy implementation class; thread-safe, low efficiency; cannot store null keys or values\r\n *          |----Properties: commonly used for configuration files. Both keys and values are of type String\r\n *\r\n *\r\n *      Underlying structure of HashMap: array + linked list    (JDK 7 and earlier)\r\n *                     array + linked list + red-black tree    (JDK 8)\r\n *\r\n * Interview questions:\r\n * 1. How is HashMap implemented under the hood?\r\n * 2. What are the similarities and differences between HashMap and Hashtable?\r\n * 3. 
What are the similarities and differences between ConcurrentHashMap and Hashtable? (not covered for now)\r\n *\r\n * II. Understanding the structure of a Map:\r\n *      Keys in a Map: unordered, non-repeatable; a Set stores all keys ---> the key's class must override equals() and hashCode() (taking HashMap as an example)\r\n *      Values in a Map: unordered, repeatable; a Collection stores all values --->the value's class must override equals()\r\n *      One key-value pair constitutes an Entry object.\r\n *      Entries in a Map: unordered, non-repeatable; a Set stores all entries\r\n *\r\n * III. How is HashMap implemented under the hood? Using JDK 7 as an example:\r\n *      HashMap map = new HashMap():\r\n *      After instantiation, the underlying code creates a one-dimensional array Entry&#91;] table of length 16.\r\n *          ...several put() calls may already have been executed...\r\n *      map.put(key1,value1):\r\n *      First, hashCode() of key1's class is called to compute key1's hash value; after this hash value is processed by some algorithm, the storage position in the Entry array is obtained.\r\n *      If the data at this position is empty, key1-value1 is added successfully. ----case 1\r\n *      
If the data at this position is not empty (meaning one or more entries already exist there, stored as a linked list), compare the hash value of key1 with those of the existing entries:\r\n *              If key1's hash value differs from the hash values of all existing entries, key1-value1 is added successfully.   ----case 2\r\n *              If key1's hash value equals that of some existing entry (key2 - value2), compare further: call key1's equals(key2) method:\r\n *                      If equals() returns false: key1-value1 is added successfully.    ----case 3\r\n *                      If equals() returns true: value1 replaces value2.\r\n *\r\n *          Note on cases 2 and 3: key1-value1 is stored together with the existing data as a linked list.\r\n *\r\n *          As entries keep being added, resizing comes into play: when the size exceeds the threshold (and the target position is non-empty), the default resize doubles the capacity and copies the existing data over.\r\n *\r\n *          Differences of JDK 8 from JDK 7 in the underlying implementation:\r\n *          1. 
new HashMap(): the underlying array of length 16 is not created yet\r\n *          2. in JDK 8 the underlying array is Node&#91;], not Entry&#91;]\r\n *          3. the array of length 16 is created on the first call to put()\r\n *          4. JDK 7 uses only array + linked list; JDK 8 uses array + linked list + red-black tree.\r\n *              When the number of elements stored as a linked list at some array index > 8 and the current array length >= 64,\r\n *              all data at that index is converted to red-black tree storage.\r\n *\r\n *          DEFAULT_INITIAL_CAPACITY: the default capacity of HashMap, 16\r\n *          DEFAULT_LOAD_FACTOR: the default load factor of HashMap: 0.75\r\n *          threshold: the resize threshold, = capacity * load factor: 16 * 0.75 => 12\r\n *          TREEIFY_THRESHOLD: when the length of a linked list in a bucket exceeds this value, it is converted to a red-black tree: 8\r\n *          MIN_TREEIFY_CAPACITY: the minimum hash table capacity at which the Nodes in a bucket may be treeified: 64\r\n *\r\n *  IV. The underlying implementation of LinkedHashMap (for reference)\r\n *      In the source code:\r\n     *     static class Entry&lt;K,V> extends HashMap.Node&lt;K,V> {\r\n     *         Entry&lt;K,V> before, after;\/\/record the insertion order of the elements\r\n     *         Entry(int hash, K key, V value, Node&lt;K,V> next) {\r\n     *             
super(hash, key, value, next);\r\n     *         }\r\n     *     }\r\n *\r\n *\r\n *  V. Methods defined in Map:\r\n *  Add, remove, and modify operations:\r\n * \uf0d8 Object put(Object key,Object value): adds the given key-value pair to the current map (or modifies an existing one)\r\n * \uf0d8 void putAll(Map m): stores all key-value pairs of m into the current map\r\n * \uf0d8 Object remove(Object key): removes the key-value pair with the given key and returns its value\r\n * \uf0d8 void clear(): clears all data in the current map\r\n *  Element query operations:\r\n * \uf0d8 Object get(Object key): returns the value mapped to the given key\r\n * \uf0d8 boolean containsKey(Object key): whether the map contains the given key\r\n * \uf0d8 boolean containsValue(Object value): whether the map contains the given value\r\n * \uf0d8 int size(): returns the number of key-value pairs in the map\r\n * \uf0d8 boolean isEmpty(): whether the current map is empty\r\n * \uf0d8 boolean equals(Object obj): whether the current map equals the argument obj\r\n *  Collection-view operations:\r\n * \uf0d8 Set keySet(): returns a Set of all keys\r\n * \uf0d8 Collection values(): returns a Collection of all values\r\n * \uf0d8 Set entrySet(): returns a Set of all key-value pairs\r\n *\r\n * Summary of commonly used methods:\r\n * Add: put(Object key, Object value)\r\n * Remove: remove(Object key)\r\n * Modify: put(Object key, Object value)\r\n * Lookup: get(Object key)\r\n * 
Length: size()\r\n * Traversal: keySet() \/ values() \/ entrySet()\r\n *\r\n * @author RichieZhang\r\n * @create 2022-10-27 4:47 PM\r\n *\/\r\npublic class MapTest {\r\n\r\n    \/*\r\n     *  Collection-view operations:\r\n     * \uf0d8 Set keySet(): returns a Set of all keys\r\n     * \uf0d8 Collection values(): returns a Collection of all values\r\n     * \uf0d8 Set entrySet(): returns a Set of all key-value pairs\r\n\r\n     *\/\r\n    @Test\r\n    public void test5(){\r\n        Map map = new HashMap();\r\n        map.put(\"AA\",123);\r\n        map.put(45,1234);\r\n        map.put(\"BB\",56);\r\n\r\n        \/\/traverse all keys: keySet()\r\n        Set set = map.keySet();\r\n        Iterator iterator = set.iterator();\r\n        while (iterator.hasNext()){\r\n            System.out.println(iterator.next());\r\n        }\r\n        System.out.println();\r\n        \/\/traverse all values: values()\r\n        Collection values = map.values();\r\n        for (Object obj : values){\r\n            System.out.println(obj);\r\n        }\r\n        System.out.println();\r\n        \/\/approach 1: entrySet():\r\n        \/\/traverse all key-value pairs\r\n        Set entrySet = map.entrySet();\r\n        Iterator iterator1 = entrySet.iterator();\r\n        while (iterator1.hasNext()){\r\n            Object obj = iterator1.next();\r\n            \/\/the elements of the entrySet are all entries\r\n            Map.Entry entry = (Map.Entry) obj;\r\n            System.out.println(entry.getKey() + \"---->\" + entry.getValue());\r\n\r\n        }\r\n        \/\/approach 2:\r\n        Set keySet = map.keySet();\r\n        Iterator iterator2 = keySet.iterator();\r\n        while (iterator2.hasNext()){\r\n  
          Object key = iterator2.next();\r\n            Object value = map.get(key);\r\n            System.out.println(key + \"=====\" + value);\r\n        }\r\n\r\n    }\r\n\r\n    \/*\r\n     *  Element query operations:\r\n     * \uf0d8 Object get(Object key): returns the value mapped to the given key\r\n     * \uf0d8 boolean containsKey(Object key): whether the map contains the given key\r\n     * \uf0d8 boolean containsValue(Object value): whether the map contains the given value\r\n     * \uf0d8 int size(): returns the number of key-value pairs in the map\r\n     * \uf0d8 boolean isEmpty(): whether the current map is empty\r\n     * \uf0d8 boolean equals(Object obj): whether the current map equals the argument obj\r\n\r\n     *\/\r\n    @Test\r\n    public void test4(){\r\n        Map map = new HashMap();\r\n        map.put(\"AA\",123);\r\n        map.put(45,123);\r\n        map.put(\"BB\",56);\r\n        \/\/ Object get(Object key)\r\n        System.out.println(map.get(45));\r\n\r\n        boolean isExist = map.containsKey(\"BB\");\r\n        System.out.println(isExist);\r\n\r\n        isExist = map.containsValue(123);\r\n        System.out.println(isExist);\r\n\r\n        map.clear();\r\n\r\n        System.out.println(map.isEmpty());\r\n\r\n\r\n    }\r\n\r\n    \/*\r\n     *  Add, remove, and modify operations:\r\n     * \uf0d8 Object put(Object key,Object value): adds the given key-value pair to the current map (or modifies an existing one)\r\n     * \uf0d8 void putAll(Map m): stores all key-value pairs of m into the current map\r\n     * \uf0d8 Object remove(Object key): removes the key-value pair with the given key and returns its value\r\n     * \uf0d8 void 
clear(): clears all data in the current map\r\n\r\n     *\/\r\n    @Test\r\n    public void test3(){\r\n        Map map = new HashMap();\r\n        \/\/add\r\n        map.put(\"AA\",123);\r\n        map.put(45,123);\r\n        map.put(\"BB\",56);\r\n        \/\/modify\r\n        map.put(\"AA\",87);\r\n\r\n        System.out.println(map);\r\n\r\n        Map map1 = new HashMap();\r\n        map1.put(\"CC\",123);\r\n        map1.put(\"DD\",123);\r\n        map.putAll(map1);\r\n\r\n        System.out.println(map);\r\n\r\n        \/\/remove(Object key)\r\n        Object value = map.remove(\"CCC\");\r\n        System.out.println(value);\r\n        System.out.println(map);\r\n\r\n        \/\/clear()\r\n        map.clear();\/\/not the same as map = null\r\n        System.out.println(map.size());\r\n        System.out.println(map);\r\n    }\r\n\r\n    @Test\r\n    public void test2(){\r\n        Map map = new HashMap();\r\n        map = new LinkedHashMap();\r\n\r\n        map.put(123,\"AA\");\r\n        map.put(345,\"BB\");\r\n        map.put(12,\"CC\");\r\n\r\n        System.out.println(map);\r\n    }\r\n\r\n    @Test\r\n    public void test1(){\r\n        Map map = new HashMap();\r\n\/\/        map = new Hashtable();\r\n        map.put(null,123);\r\n    }\r\n}\r\n<\/code><\/pre>\n\n\n\n<p>jdk 7 HashMap source code<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/*\r\n * Copyright (c) 1997, 2010, Oracle and\/or its affiliates. All rights reserved.\r\n * ORACLE PROPRIETARY\/CONFIDENTIAL. Use is subject to license terms.\r\n *\/\r\n\r\npackage top.qaqaq.java.P548.src7;\r\nimport java.io.*;\r\nimport java.util.*;\r\n\r\n\/**\r\n * Hash table based implementation of the &lt;tt>Map&lt;\/tt> interface.  
This\r\n * implementation provides all of the optional map operations, and permits\r\n * &lt;tt>null&lt;\/tt> values and the &lt;tt>null&lt;\/tt> key.  (The &lt;tt>HashMap&lt;\/tt>\r\n * class is roughly equivalent to &lt;tt>Hashtable&lt;\/tt>, except that it is\r\n * unsynchronized and permits nulls.)  This class makes no guarantees as to\r\n * the order of the map; in particular, it does not guarantee that the order\r\n * will remain constant over time.\r\n *\r\n * &lt;p>This implementation provides constant-time performance for the basic\r\n * operations (&lt;tt>get&lt;\/tt> and &lt;tt>put&lt;\/tt>), assuming the hash function\r\n * disperses the elements properly among the buckets.  Iteration over\r\n * collection views requires time proportional to the \"capacity\" of the\r\n * &lt;tt>HashMap&lt;\/tt> instance (the number of buckets) plus its size (the number\r\n * of key-value mappings).  Thus, it's very important not to set the initial\r\n * capacity too high (or the load factor too low) if iteration performance is\r\n * important.\r\n *\r\n * &lt;p>An instance of &lt;tt>HashMap&lt;\/tt> has two parameters that affect its\r\n * performance: &lt;i>initial capacity&lt;\/i> and &lt;i>load factor&lt;\/i>.  The\r\n * &lt;i>capacity&lt;\/i> is the number of buckets in the hash table, and the initial\r\n * capacity is simply the capacity at the time the hash table is created.  The\r\n * &lt;i>load factor&lt;\/i> is a measure of how full the hash table is allowed to\r\n * get before its capacity is automatically increased.  When the number of\r\n * entries in the hash table exceeds the product of the load factor and the\r\n * current capacity, the hash table is &lt;i>rehashed&lt;\/i> (that is, internal data\r\n * structures are rebuilt) so that the hash table has approximately twice the\r\n * number of buckets.\r\n *\r\n * &lt;p>As a general rule, the default load factor (.75) offers a good tradeoff\r\n * between time and space costs.  
Higher values decrease the space overhead\r\n * but increase the lookup cost (reflected in most of the operations of the\r\n * &lt;tt>HashMap&lt;\/tt> class, including &lt;tt>get&lt;\/tt> and &lt;tt>put&lt;\/tt>).  The\r\n * expected number of entries in the map and its load factor should be taken\r\n * into account when setting its initial capacity, so as to minimize the\r\n * number of rehash operations.  If the initial capacity is greater\r\n * than the maximum number of entries divided by the load factor, no\r\n * rehash operations will ever occur.\r\n *\r\n * &lt;p>If many mappings are to be stored in a &lt;tt>HashMap&lt;\/tt> instance,\r\n * creating it with a sufficiently large capacity will allow the mappings to\r\n * be stored more efficiently than letting it perform automatic rehashing as\r\n * needed to grow the table.\r\n *\r\n * &lt;p>&lt;strong>Note that this implementation is not synchronized.&lt;\/strong>\r\n * If multiple threads access a hash map concurrently, and at least one of\r\n * the threads modifies the map structurally, it &lt;i>must&lt;\/i> be\r\n * synchronized externally.  (A structural modification is any operation\r\n * that adds or deletes one or more mappings; merely changing the value\r\n * associated with a key that an instance already contains is not a\r\n * structural modification.)  This is typically accomplished by\r\n * synchronizing on some object that naturally encapsulates the map.\r\n *\r\n * If no such object exists, the map should be \"wrapped\" using the\r\n * {@link Collections#synchronizedMap Collections.synchronizedMap}\r\n * method.  
This is best done at creation time, to prevent accidental\r\n * unsynchronized access to the map:&lt;pre>\r\n *   Map m = Collections.synchronizedMap(new HashMap(...));&lt;\/pre>\r\n *\r\n * &lt;p>The iterators returned by all of this class's \"collection view methods\"\r\n * are &lt;i>fail-fast&lt;\/i>: if the map is structurally modified at any time after\r\n * the iterator is created, in any way except through the iterator's own\r\n * &lt;tt>remove&lt;\/tt> method, the iterator will throw a\r\n * {@link ConcurrentModificationException}.  Thus, in the face of concurrent\r\n * modification, the iterator fails quickly and cleanly, rather than risking\r\n * arbitrary, non-deterministic behavior at an undetermined time in the\r\n * future.\r\n *\r\n * &lt;p>Note that the fail-fast behavior of an iterator cannot be guaranteed\r\n * as it is, generally speaking, impossible to make any hard guarantees in the\r\n * presence of unsynchronized concurrent modification.  Fail-fast iterators\r\n * throw &lt;tt>ConcurrentModificationException&lt;\/tt> on a best-effort basis.\r\n * Therefore, it would be wrong to write a program that depended on this\r\n * exception for its correctness: &lt;i>the fail-fast behavior of iterators\r\n * should be used only to detect bugs.&lt;\/i>\r\n *\r\n * &lt;p>This class is a member of the\r\n * &lt;a href=\"{@docRoot}\/..\/technotes\/guides\/collections\/index.html\">\r\n * Java Collections Framework&lt;\/a>.\r\n *\r\n * @param &lt;K> the type of keys maintained by this map\r\n * @param &lt;V> the type of mapped values\r\n *\r\n * @author  Doug Lea\r\n * @author  Josh Bloch\r\n * @author  Arthur van Hoff\r\n * @author  Neal Gafter\r\n * @see     Object#hashCode()\r\n * @see     Collection\r\n * @see     Map\r\n * @see     TreeMap\r\n * @see     Hashtable\r\n * @since   1.2\r\n *\/\r\n\r\npublic class HashMap&lt;K,V>\r\n    extends AbstractMap&lt;K,V>\r\n    implements Map&lt;K,V>, Cloneable, Serializable\r\n{\r\n\r\n    \/**\r\n     * The 
default initial capacity - MUST be a power of two.\r\n     *\/\r\n    static final int DEFAULT_INITIAL_CAPACITY = 16;\r\n\r\n    \/**\r\n     * The maximum capacity, used if a higher value is implicitly specified\r\n     * by either of the constructors with arguments.\r\n     * MUST be a power of two &lt;= 1&lt;&lt;30.\r\n     *\/\r\n    static final int MAXIMUM_CAPACITY = 1 &lt;&lt; 30;\r\n\r\n    \/**\r\n     * The load factor used when none specified in constructor.\r\n     *\/\r\n    static final float DEFAULT_LOAD_FACTOR = 0.75f;\r\n\r\n    \/**\r\n     * The table, resized as necessary. Length MUST Always be a power of two.\r\n     *\/\r\n    transient Entry&lt;K,V>&#91;] table;\r\n\r\n    \/**\r\n     * The number of key-value mappings contained in this map.\r\n     *\/\r\n    transient int size;\r\n\r\n    \/**\r\n     * The next size value at which to resize (capacity * load factor).\r\n     * @serial\r\n     *\/\r\n    int threshold;\r\n\r\n    \/**\r\n     * The load factor for the hash table.\r\n     *\r\n     * @serial\r\n     *\/\r\n    final float loadFactor;\r\n\r\n    \/**\r\n     * The number of times this HashMap has been structurally modified\r\n     * Structural modifications are those that change the number of mappings in\r\n     * the HashMap or otherwise modify its internal structure (e.g.,\r\n     * rehash).  This field is used to make iterators on Collection-views of\r\n     * the HashMap fail-fast.  (See ConcurrentModificationException).\r\n     *\/\r\n    transient int modCount;\r\n\r\n    \/**\r\n     * The default threshold of map capacity above which alternative hashing is\r\n     * used for String keys. Alternative hashing reduces the incidence of\r\n     * collisions due to weak hash code calculation for String keys.\r\n     * &lt;p\/>\r\n     * This value may be overridden by defining the system property\r\n     * {@code jdk.map.althashing.threshold}. 
A property value of {@code 1}\r\n     * forces alternative hashing to be used at all times whereas\r\n     * {@code -1} value ensures that alternative hashing is never used.\r\n     *\/\r\n    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;\r\n\r\n    \/**\r\n     * holds values which can't be initialized until after VM is booted.\r\n     *\/\r\n    private static class Holder {\r\n\r\n            \/\/ Unsafe mechanics\r\n        \/**\r\n         * Unsafe utilities\r\n         *\/\r\n        static final sun.misc.Unsafe UNSAFE;\r\n\r\n        \/**\r\n         * Offset of \"final\" hashSeed field we must set in readObject() method.\r\n         *\/\r\n        static final long HASHSEED_OFFSET;\r\n\r\n        \/**\r\n         * Table capacity above which to switch to use alternative hashing.\r\n         *\/\r\n        static final int ALTERNATIVE_HASHING_THRESHOLD;\r\n\r\n        static {\r\n            String altThreshold = java.security.AccessController.doPrivileged(\r\n                new sun.security.action.GetPropertyAction(\r\n                    \"jdk.map.althashing.threshold\"));\r\n\r\n            int threshold;\r\n            try {\r\n                threshold = (null != altThreshold)\r\n                        ? 
Integer.parseInt(altThreshold)\r\n                        : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;\r\n\r\n                \/\/ disable alternative hashing if -1\r\n                if (threshold == -1) {\r\n                    threshold = Integer.MAX_VALUE;\r\n                }\r\n\r\n                if (threshold &lt; 0) {\r\n                    throw new IllegalArgumentException(\"value must be positive integer.\");\r\n                }\r\n            } catch(IllegalArgumentException failed) {\r\n                throw new Error(\"Illegal value for 'jdk.map.althashing.threshold'\", failed);\r\n            }\r\n            ALTERNATIVE_HASHING_THRESHOLD = threshold;\r\n\r\n            try {\r\n                UNSAFE = sun.misc.Unsafe.getUnsafe();\r\n                HASHSEED_OFFSET = UNSAFE.objectFieldOffset(\r\n                    HashMap.class.getDeclaredField(\"hashSeed\"));\r\n            } catch (NoSuchFieldException | SecurityException e) {\r\n                throw new Error(\"Failed to record hashSeed offset\", e);\r\n            }\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * If {@code true} then perform alternative hashing of String keys to reduce\r\n     * the incidence of collisions due to weak hash code calculation.\r\n     *\/\r\n    transient boolean useAltHashing;\r\n\r\n    \/**\r\n     * A randomizing value associated with this instance that is applied to\r\n     * hash code of keys to make hash collisions harder to find.\r\n     *\/\r\n    transient final int hashSeed = sun.misc.Hashing.randomHashSeed(this);\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the specified initial\r\n     * capacity and load factor.\r\n     *\r\n     * @param  initialCapacity the initial capacity\r\n     * @param  loadFactor      the load factor\r\n     * @throws IllegalArgumentException if the initial capacity is negative\r\n     *         or the load factor is nonpositive\r\n     *\/\r\n    public HashMap(int initialCapacity, float 
loadFactor) {\r\n        if (initialCapacity &lt; 0)\r\n            throw new IllegalArgumentException(\"Illegal initial capacity: \" +\r\n                                               initialCapacity);\r\n        if (initialCapacity > MAXIMUM_CAPACITY)\r\n            initialCapacity = MAXIMUM_CAPACITY;\r\n        if (loadFactor &lt;= 0 || Float.isNaN(loadFactor))\r\n            throw new IllegalArgumentException(\"Illegal load factor: \" +\r\n                                               loadFactor);\r\n\r\n        \/\/ Find a power of 2 >= initialCapacity\r\n        int capacity = 1;\r\n        while (capacity &lt; initialCapacity)\r\n            capacity &lt;&lt;= 1;\r\n\r\n        this.loadFactor = loadFactor;\r\n        threshold = (int)Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);\r\n        table = new Entry&#91;capacity];\r\n        useAltHashing = sun.misc.VM.isBooted() &amp;&amp;\r\n                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);\r\n        init();\r\n    }\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the specified initial\r\n     * capacity and the default load factor (0.75).\r\n     *\r\n     * @param  initialCapacity the initial capacity.\r\n     * @throws IllegalArgumentException if the initial capacity is negative.\r\n     *\/\r\n    public HashMap(int initialCapacity) {\r\n        this(initialCapacity, DEFAULT_LOAD_FACTOR);\r\n    }\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the default initial capacity\r\n     * (16) and the default load factor (0.75).\r\n     *\/\r\n    public HashMap() {\r\n        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);\r\n    }\r\n\r\n    \/**\r\n     * Constructs a new &lt;tt>HashMap&lt;\/tt> with the same mappings as the\r\n     * specified &lt;tt>Map&lt;\/tt>.  
The &lt;tt>HashMap&lt;\/tt> is created with\r\n     * default load factor (0.75) and an initial capacity sufficient to\r\n     * hold the mappings in the specified &lt;tt>Map&lt;\/tt>.\r\n     *\r\n     * @param   m the map whose mappings are to be placed in this map\r\n     * @throws  NullPointerException if the specified map is null\r\n     *\/\r\n    public HashMap(Map&lt;? extends K, ? extends V> m) {\r\n        this(Math.max((int) (m.size() \/ DEFAULT_LOAD_FACTOR) + 1,\r\n                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);\r\n        putAllForCreate(m);\r\n    }\r\n\r\n    \/\/ internal utilities\r\n\r\n    \/**\r\n     * Initialization hook for subclasses. This method is called\r\n     * in all constructors and pseudo-constructors (clone, readObject)\r\n     * after HashMap has been initialized but before any entries have\r\n     * been inserted.  (In the absence of this method, readObject would\r\n     * require explicit knowledge of subclasses.)\r\n     *\/\r\n    void init() {\r\n    }\r\n\r\n    \/**\r\n     * Retrieve object hash code and applies a supplemental hash function to the\r\n     * result hash, which defends against poor quality hash functions.  This is\r\n     * critical because HashMap uses power-of-two length hash tables, that\r\n     * otherwise encounter collisions for hashCodes that do not differ\r\n     * in lower bits. 
Note: Null keys always map to hash 0, thus index 0.\r\n     *\/\r\n    final int hash(Object k) {\r\n        int h = 0;\r\n        if (useAltHashing) {\r\n            if (k instanceof String) {\r\n                return sun.misc.Hashing.stringHash32((String) k);\r\n            }\r\n            h = hashSeed;\r\n        }\r\n\r\n        h ^= k.hashCode();\r\n\r\n        \/\/ This function ensures that hashCodes that differ only by\r\n        \/\/ constant multiples at each bit position have a bounded\r\n        \/\/ number of collisions (approximately 8 at default load factor).\r\n        h ^= (h >>> 20) ^ (h >>> 12);\r\n        return h ^ (h >>> 7) ^ (h >>> 4);\r\n    }\r\n\r\n    \/**\r\n     * Returns index for hash code h.\r\n     *\/\r\n    static int indexFor(int h, int length) {\r\n        return h &amp; (length-1);\r\n    }\r\n\r\n    \/**\r\n     * Returns the number of key-value mappings in this map.\r\n     *\r\n     * @return the number of key-value mappings in this map\r\n     *\/\r\n    public int size() {\r\n        return size;\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map contains no key-value mappings.\r\n     *\r\n     * @return &lt;tt>true&lt;\/tt> if this map contains no key-value mappings\r\n     *\/\r\n    public boolean isEmpty() {\r\n        return size == 0;\r\n    }\r\n\r\n    \/**\r\n     * Returns the value to which the specified key is mapped,\r\n     * or {@code null} if this map contains no mapping for the key.\r\n     *\r\n     * &lt;p>More formally, if this map contains a mapping from a key\r\n     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :\r\n     * key.equals(k))}, then this method returns {@code v}; otherwise\r\n     * it returns {@code null}.  
(There can be at most one such mapping.)\r\n     *\r\n     * &lt;p>A return value of {@code null} does not &lt;i>necessarily&lt;\/i>\r\n     * indicate that the map contains no mapping for the key; it's also\r\n     * possible that the map explicitly maps the key to {@code null}.\r\n     * The {@link #containsKey containsKey} operation may be used to\r\n     * distinguish these two cases.\r\n     *\r\n     * @see #put(Object, Object)\r\n     *\/\r\n    public V get(Object key) {\r\n        if (key == null)\r\n            return getForNullKey();\r\n        Entry&lt;K,V> entry = getEntry(key);\r\n\r\n        return null == entry ? null : entry.getValue();\r\n    }\r\n\r\n    \/**\r\n     * Offloaded version of get() to look up null keys.  Null keys map\r\n     * to index 0.  This null case is split out into separate methods\r\n     * for the sake of performance in the two most commonly used\r\n     * operations (get and put), but incorporated with conditionals in\r\n     * others.\r\n     *\/\r\n    private V getForNullKey() {\r\n        for (Entry&lt;K,V> e = table&#91;0]; e != null; e = e.next) {\r\n            if (e.key == null)\r\n                return e.value;\r\n        }\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map contains a mapping for the\r\n     * specified key.\r\n     *\r\n     * @param   key   The key whose presence in this map is to be tested\r\n     * @return &lt;tt>true&lt;\/tt> if this map contains a mapping for the specified\r\n     * key.\r\n     *\/\r\n    public boolean containsKey(Object key) {\r\n        return getEntry(key) != null;\r\n    }\r\n\r\n    \/**\r\n     * Returns the entry associated with the specified key in the\r\n     * HashMap.  Returns null if the HashMap contains no mapping\r\n     * for the key.\r\n     *\/\r\n    final Entry&lt;K,V> getEntry(Object key) {\r\n        int hash = (key == null) ? 
0 : hash(key);\r\n        for (Entry&lt;K,V> e = table&#91;indexFor(hash, table.length)];\r\n             e != null;\r\n             e = e.next) {\r\n            Object k;\r\n            if (e.hash == hash &amp;&amp;\r\n                ((k = e.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                return e;\r\n        }\r\n        return null;\r\n    }\r\n\r\n\r\n    \/**\r\n     * Associates the specified value with the specified key in this map.\r\n     * If the map previously contained a mapping for the key, the old\r\n     * value is replaced.\r\n     *\r\n     * @param key key with which the specified value is to be associated\r\n     * @param value value to be associated with the specified key\r\n     * @return the previous value associated with &lt;tt>key&lt;\/tt>, or\r\n     *         &lt;tt>null&lt;\/tt> if there was no mapping for &lt;tt>key&lt;\/tt>.\r\n     *         (A &lt;tt>null&lt;\/tt> return can also indicate that the map\r\n     *         previously associated &lt;tt>null&lt;\/tt> with &lt;tt>key&lt;\/tt>.)\r\n     *\/\r\n    public V put(K key, V value) {\r\n        if (key == null)\r\n            return putForNullKey(value);\r\n        int hash = hash(key);\r\n        int i = indexFor(hash, table.length);\r\n        for (Entry&lt;K,V> e = table&#91;i]; e != null; e = e.next) {\r\n            Object k;\r\n            if (e.hash == hash &amp;&amp; ((k = e.key) == key || key.equals(k))) {\r\n                V oldValue = e.value;\r\n                e.value = value;\r\n                e.recordAccess(this);\r\n                return oldValue;\r\n            }\r\n        }\r\n\r\n        modCount++;\r\n        addEntry(hash, key, value, i);\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Offloaded version of put for null keys\r\n     *\/\r\n    private V putForNullKey(V value) {\r\n        for (Entry&lt;K,V> e = table&#91;0]; e != null; e = e.next) {\r\n            if (e.key == null) {\r\n                V oldValue = 
e.value;\r\n                e.value = value;\r\n                e.recordAccess(this);\r\n                return oldValue;\r\n            }\r\n        }\r\n        modCount++;\r\n        addEntry(0, null, value, 0);\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * This method is used instead of put by constructors and\r\n     * pseudoconstructors (clone, readObject).  It does not resize the table,\r\n     * check for comodification, etc.  It calls createEntry rather than\r\n     * addEntry.\r\n     *\/\r\n    private void putForCreate(K key, V value) {\r\n        int hash = null == key ? 0 : hash(key);\r\n        int i = indexFor(hash, table.length);\r\n\r\n        \/**\r\n         * Look for preexisting entry for key.  This will never happen for\r\n         * clone or deserialize.  It will only happen for construction if the\r\n         * input Map is a sorted map whose ordering is inconsistent w\/ equals.\r\n         *\/\r\n        for (Entry&lt;K,V> e = table&#91;i]; e != null; e = e.next) {\r\n            Object k;\r\n            if (e.hash == hash &amp;&amp;\r\n                ((k = e.key) == key || (key != null &amp;&amp; key.equals(k)))) {\r\n                e.value = value;\r\n                return;\r\n            }\r\n        }\r\n\r\n        createEntry(hash, key, value, i);\r\n    }\r\n\r\n    private void putAllForCreate(Map&lt;? extends K, ? extends V> m) {\r\n        for (Map.Entry&lt;? extends K, ? extends V> e : m.entrySet())\r\n            putForCreate(e.getKey(), e.getValue());\r\n    }\r\n\r\n    \/**\r\n     * Rehashes the contents of this map into a new array with a\r\n     * larger capacity.  
This method is called automatically when the\r\n     * number of keys in this map reaches its threshold.\r\n     *\r\n     * If current capacity is MAXIMUM_CAPACITY, this method does not\r\n     * resize the map, but sets threshold to Integer.MAX_VALUE.\r\n     * This has the effect of preventing future calls.\r\n     *\r\n     * @param newCapacity the new capacity, MUST be a power of two;\r\n     *        must be greater than current capacity unless current\r\n     *        capacity is MAXIMUM_CAPACITY (in which case value\r\n     *        is irrelevant).\r\n     *\/\r\n    void resize(int newCapacity) {\r\n        Entry&#91;] oldTable = table;\r\n        int oldCapacity = oldTable.length;\r\n        if (oldCapacity == MAXIMUM_CAPACITY) {\r\n            threshold = Integer.MAX_VALUE;\r\n            return;\r\n        }\r\n\r\n        Entry&#91;] newTable = new Entry&#91;newCapacity];\r\n        boolean oldAltHashing = useAltHashing;\r\n        useAltHashing |= sun.misc.VM.isBooted() &amp;&amp;\r\n                (newCapacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);\r\n        boolean rehash = oldAltHashing ^ useAltHashing;\r\n        transfer(newTable, rehash);\r\n        table = newTable;\r\n        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);\r\n    }\r\n\r\n    \/**\r\n     * Transfers all entries from current table to newTable.\r\n     *\/\r\n    void transfer(Entry&#91;] newTable, boolean rehash) {\r\n        int newCapacity = newTable.length;\r\n        for (Entry&lt;K,V> e : table) {\r\n            while(null != e) {\r\n                Entry&lt;K,V> next = e.next;\r\n                if (rehash) {\r\n                    e.hash = null == e.key ? 
0 : hash(e.key);\r\n                }\r\n                int i = indexFor(e.hash, newCapacity);\r\n                e.next = newTable&#91;i];\r\n                newTable&#91;i] = e;\r\n                e = next;\r\n            }\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Copies all of the mappings from the specified map to this map.\r\n     * These mappings will replace any mappings that this map had for\r\n     * any of the keys currently in the specified map.\r\n     *\r\n     * @param m mappings to be stored in this map\r\n     * @throws NullPointerException if the specified map is null\r\n     *\/\r\n    public void putAll(Map&lt;? extends K, ? extends V> m) {\r\n        int numKeysToBeAdded = m.size();\r\n        if (numKeysToBeAdded == 0)\r\n            return;\r\n\r\n        \/*\r\n         * Expand the map if the number of mappings to be added\r\n         * is greater than or equal to threshold.  This is conservative; the\r\n         * obvious condition is (m.size() + size) >= threshold, but this\r\n         * condition could result in a map with twice the appropriate capacity,\r\n         * if the keys to be added overlap with the keys already in this map.\r\n         * By using the conservative calculation, we subject ourselves\r\n         * to at most one extra resize.\r\n         *\/\r\n        if (numKeysToBeAdded > threshold) {\r\n            int targetCapacity = (int)(numKeysToBeAdded \/ loadFactor + 1);\r\n            if (targetCapacity > MAXIMUM_CAPACITY)\r\n                targetCapacity = MAXIMUM_CAPACITY;\r\n            int newCapacity = table.length;\r\n            while (newCapacity &lt; targetCapacity)\r\n                newCapacity &lt;&lt;= 1;\r\n            if (newCapacity > table.length)\r\n                resize(newCapacity);\r\n        }\r\n\r\n        for (Map.Entry&lt;? extends K, ? 
extends V> e : m.entrySet())\r\n            put(e.getKey(), e.getValue());\r\n    }\r\n\r\n    \/**\r\n     * Removes the mapping for the specified key from this map if present.\r\n     *\r\n     * @param  key key whose mapping is to be removed from the map\r\n     * @return the previous value associated with &lt;tt>key&lt;\/tt>, or\r\n     *         &lt;tt>null&lt;\/tt> if there was no mapping for &lt;tt>key&lt;\/tt>.\r\n     *         (A &lt;tt>null&lt;\/tt> return can also indicate that the map\r\n     *         previously associated &lt;tt>null&lt;\/tt> with &lt;tt>key&lt;\/tt>.)\r\n     *\/\r\n    public V remove(Object key) {\r\n        Entry&lt;K,V> e = removeEntryForKey(key);\r\n        return (e == null ? null : e.value);\r\n    }\r\n\r\n    \/**\r\n     * Removes and returns the entry associated with the specified key\r\n     * in the HashMap.  Returns null if the HashMap contains no mapping\r\n     * for this key.\r\n     *\/\r\n    final Entry&lt;K,V> removeEntryForKey(Object key) {\r\n        int hash = (key == null) ? 
0 : hash(key);\r\n        int i = indexFor(hash, table.length);\r\n        Entry&lt;K,V> prev = table&#91;i];\r\n        Entry&lt;K,V> e = prev;\r\n\r\n        while (e != null) {\r\n            Entry&lt;K,V> next = e.next;\r\n            Object k;\r\n            if (e.hash == hash &amp;&amp;\r\n                ((k = e.key) == key || (key != null &amp;&amp; key.equals(k)))) {\r\n                modCount++;\r\n                size--;\r\n                if (prev == e)\r\n                    table&#91;i] = next;\r\n                else\r\n                    prev.next = next;\r\n                e.recordRemoval(this);\r\n                return e;\r\n            }\r\n            prev = e;\r\n            e = next;\r\n        }\r\n\r\n        return e;\r\n    }\r\n\r\n    \/**\r\n     * Special version of remove for EntrySet using {@code Map.Entry.equals()}\r\n     * for matching.\r\n     *\/\r\n    final Entry&lt;K,V> removeMapping(Object o) {\r\n        if (!(o instanceof Map.Entry))\r\n            return null;\r\n\r\n        Map.Entry&lt;K,V> entry = (Map.Entry&lt;K,V>) o;\r\n        Object key = entry.getKey();\r\n        int hash = (key == null) ? 
0 : hash(key);\r\n        int i = indexFor(hash, table.length);\r\n        Entry&lt;K,V> prev = table&#91;i];\r\n        Entry&lt;K,V> e = prev;\r\n\r\n        while (e != null) {\r\n            Entry&lt;K,V> next = e.next;\r\n            if (e.hash == hash &amp;&amp; e.equals(entry)) {\r\n                modCount++;\r\n                size--;\r\n                if (prev == e)\r\n                    table&#91;i] = next;\r\n                else\r\n                    prev.next = next;\r\n                e.recordRemoval(this);\r\n                return e;\r\n            }\r\n            prev = e;\r\n            e = next;\r\n        }\r\n\r\n        return e;\r\n    }\r\n\r\n    \/**\r\n     * Removes all of the mappings from this map.\r\n     * The map will be empty after this call returns.\r\n     *\/\r\n    public void clear() {\r\n        modCount++;\r\n        Entry&#91;] tab = table;\r\n        for (int i = 0; i &lt; tab.length; i++)\r\n            tab&#91;i] = null;\r\n        size = 0;\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map maps one or more keys to the\r\n     * specified value.\r\n     *\r\n     * @param value value whose presence in this map is to be tested\r\n     * @return &lt;tt>true&lt;\/tt> if this map maps one or more keys to the\r\n     *         specified value\r\n     *\/\r\n    public boolean containsValue(Object value) {\r\n        if (value == null)\r\n            return containsNullValue();\r\n\r\n        Entry&#91;] tab = table;\r\n        for (int i = 0; i &lt; tab.length ; i++)\r\n            for (Entry e = tab&#91;i] ; e != null ; e = e.next)\r\n                if (value.equals(e.value))\r\n                    return true;\r\n        return false;\r\n    }\r\n\r\n    \/**\r\n     * Special-case code for containsValue with null argument\r\n     *\/\r\n    private boolean containsNullValue() {\r\n        Entry&#91;] tab = table;\r\n        for (int i = 0; i &lt; tab.length ; i++)\r\n            for (Entry e 
= tab&#91;i] ; e != null ; e = e.next)\r\n                if (e.value == null)\r\n                    return true;\r\n        return false;\r\n    }\r\n\r\n    \/**\r\n     * Returns a shallow copy of this &lt;tt>HashMap&lt;\/tt> instance: the keys and\r\n     * values themselves are not cloned.\r\n     *\r\n     * @return a shallow copy of this map\r\n     *\/\r\n    public Object clone() {\r\n        HashMap&lt;K,V> result = null;\r\n        try {\r\n            result = (HashMap&lt;K,V>)super.clone();\r\n        } catch (CloneNotSupportedException e) {\r\n            \/\/ assert false;\r\n        }\r\n        result.table = new Entry&#91;table.length];\r\n        result.entrySet = null;\r\n        result.modCount = 0;\r\n        result.size = 0;\r\n        result.init();\r\n        result.putAllForCreate(this);\r\n\r\n        return result;\r\n    }\r\n\r\n    static class Entry&lt;K,V> implements Map.Entry&lt;K,V> {\r\n        final K key;\r\n        V value;\r\n        Entry&lt;K,V> next;\r\n        int hash;\r\n\r\n        \/**\r\n         * Creates new entry.\r\n         *\/\r\n        Entry(int h, K k, V v, Entry&lt;K,V> n) {\r\n            value = v;\r\n            next = n;\r\n            key = k;\r\n            hash = h;\r\n        }\r\n\r\n        public final K getKey() {\r\n            return key;\r\n        }\r\n\r\n        public final V getValue() {\r\n            return value;\r\n        }\r\n\r\n        public final V setValue(V newValue) {\r\n            V oldValue = value;\r\n            value = newValue;\r\n            return oldValue;\r\n        }\r\n\r\n        public final boolean equals(Object o) {\r\n            if (!(o instanceof Map.Entry))\r\n                return false;\r\n            Map.Entry e = (Map.Entry)o;\r\n            Object k1 = getKey();\r\n            Object k2 = e.getKey();\r\n            if (k1 == k2 || (k1 != null &amp;&amp; k1.equals(k2))) {\r\n                Object v1 = getValue();\r\n                Object v2 = 
e.getValue();\r\n                if (v1 == v2 || (v1 != null &amp;&amp; v1.equals(v2)))\r\n                    return true;\r\n            }\r\n            return false;\r\n        }\r\n\r\n        public final int hashCode() {\r\n            return (key==null   ? 0 : key.hashCode()) ^\r\n                   (value==null ? 0 : value.hashCode());\r\n        }\r\n\r\n        public final String toString() {\r\n            return getKey() + \"=\" + getValue();\r\n        }\r\n\r\n        \/**\r\n         * This method is invoked whenever the value in an entry is\r\n         * overwritten by an invocation of put(k,v) for a key k that's already\r\n         * in the HashMap.\r\n         *\/\r\n        void recordAccess(HashMap&lt;K,V> m) {\r\n        }\r\n\r\n        \/**\r\n         * This method is invoked whenever the entry is\r\n         * removed from the table.\r\n         *\/\r\n        void recordRemoval(HashMap&lt;K,V> m) {\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Adds a new entry with the specified key, value and hash code to\r\n     * the specified bucket.  It is the responsibility of this\r\n     * method to resize the table if appropriate.\r\n     *\r\n     * Subclass overrides this to alter the behavior of put method.\r\n     *\/\r\n    void addEntry(int hash, K key, V value, int bucketIndex) {\r\n        if ((size >= threshold) &amp;&amp; (null != table&#91;bucketIndex])) {\r\n            resize(2 * table.length);\r\n            hash = (null != key) ? hash(key) : 0;\r\n            bucketIndex = indexFor(hash, table.length);\r\n        }\r\n\r\n        createEntry(hash, key, value, bucketIndex);\r\n    }\r\n\r\n    \/**\r\n     * Like addEntry except that this version is used when creating entries\r\n     * as part of Map construction or \"pseudo-construction\" (cloning,\r\n     * deserialization).  
This version needn't worry about resizing the table.\r\n     *\r\n     * Subclass overrides this to alter the behavior of HashMap(Map),\r\n     * clone, and readObject.\r\n     *\/\r\n    void createEntry(int hash, K key, V value, int bucketIndex) {\r\n        Entry&lt;K,V> e = table&#91;bucketIndex];\r\n        table&#91;bucketIndex] = new Entry&lt;>(hash, key, value, e);\r\n        size++;\r\n    }\r\n\r\n    private abstract class HashIterator&lt;E> implements Iterator&lt;E> {\r\n        Entry&lt;K,V> next;        \/\/ next entry to return\r\n        int expectedModCount;   \/\/ For fast-fail\r\n        int index;              \/\/ current slot\r\n        Entry&lt;K,V> current;     \/\/ current entry\r\n\r\n        HashIterator() {\r\n            expectedModCount = modCount;\r\n            if (size > 0) { \/\/ advance to first entry\r\n                Entry&#91;] t = table;\r\n                while (index &lt; t.length &amp;&amp; (next = t&#91;index++]) == null)\r\n                    ;\r\n            }\r\n        }\r\n\r\n        public final boolean hasNext() {\r\n            return next != null;\r\n        }\r\n\r\n        final Entry&lt;K,V> nextEntry() {\r\n            if (modCount != expectedModCount)\r\n                throw new ConcurrentModificationException();\r\n            Entry&lt;K,V> e = next;\r\n            if (e == null)\r\n                throw new NoSuchElementException();\r\n\r\n            if ((next = e.next) == null) {\r\n                Entry&#91;] t = table;\r\n                while (index &lt; t.length &amp;&amp; (next = t&#91;index++]) == null)\r\n                    ;\r\n            }\r\n            current = e;\r\n            return e;\r\n        }\r\n\r\n        public void remove() {\r\n            if (current == null)\r\n                throw new IllegalStateException();\r\n            if (modCount != expectedModCount)\r\n                throw new ConcurrentModificationException();\r\n            Object k = current.key;\r\n         
   current = null;\r\n            HashMap.this.removeEntryForKey(k);\r\n            expectedModCount = modCount;\r\n        }\r\n    }\r\n\r\n    private final class ValueIterator extends HashIterator&lt;V> {\r\n        public V next() {\r\n            return nextEntry().value;\r\n        }\r\n    }\r\n\r\n    private final class KeyIterator extends HashIterator&lt;K> {\r\n        public K next() {\r\n            return nextEntry().getKey();\r\n        }\r\n    }\r\n\r\n    private final class EntryIterator extends HashIterator&lt;Map.Entry&lt;K,V>> {\r\n        public Map.Entry&lt;K,V> next() {\r\n            return nextEntry();\r\n        }\r\n    }\r\n\r\n    \/\/ Subclass overrides these to alter behavior of views' iterator() method\r\n    Iterator&lt;K> newKeyIterator()   {\r\n        return new KeyIterator();\r\n    }\r\n    Iterator&lt;V> newValueIterator()   {\r\n        return new ValueIterator();\r\n    }\r\n    Iterator&lt;Map.Entry&lt;K,V>> newEntryIterator()   {\r\n        return new EntryIterator();\r\n    }\r\n\r\n\r\n    \/\/ Views\r\n\r\n    private transient Set&lt;Map.Entry&lt;K,V>> entrySet = null;\r\n\r\n    \/**\r\n     * Returns a {@link Set} view of the keys contained in this map.\r\n     * The set is backed by the map, so changes to the map are\r\n     * reflected in the set, and vice-versa.  If the map is modified\r\n     * while an iteration over the set is in progress (except through\r\n     * the iterator's own &lt;tt>remove&lt;\/tt> operation), the results of\r\n     * the iteration are undefined.  The set supports element removal,\r\n     * which removes the corresponding mapping from the map, via the\r\n     * &lt;tt>Iterator.remove&lt;\/tt>, &lt;tt>Set.remove&lt;\/tt>,\r\n     * &lt;tt>removeAll&lt;\/tt>, &lt;tt>retainAll&lt;\/tt>, and &lt;tt>clear&lt;\/tt>\r\n     * operations.  
It does not support the &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt>\r\n     * operations.\r\n     *\/\r\n    public Set&lt;K> keySet() {\r\n        Set&lt;K> ks = keySet;\r\n        return (ks != null ? ks : (keySet = new KeySet()));\r\n    }\r\n\r\n    private final class KeySet extends AbstractSet&lt;K> {\r\n        public Iterator&lt;K> iterator() {\r\n            return newKeyIterator();\r\n        }\r\n        public int size() {\r\n            return size;\r\n        }\r\n        public boolean contains(Object o) {\r\n            return containsKey(o);\r\n        }\r\n        public boolean remove(Object o) {\r\n            return HashMap.this.removeEntryForKey(o) != null;\r\n        }\r\n        public void clear() {\r\n            HashMap.this.clear();\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns a {@link Collection} view of the values contained in this map.\r\n     * The collection is backed by the map, so changes to the map are\r\n     * reflected in the collection, and vice-versa.  If the map is\r\n     * modified while an iteration over the collection is in progress\r\n     * (except through the iterator's own &lt;tt>remove&lt;\/tt> operation),\r\n     * the results of the iteration are undefined.  The collection\r\n     * supports element removal, which removes the corresponding\r\n     * mapping from the map, via the &lt;tt>Iterator.remove&lt;\/tt>,\r\n     * &lt;tt>Collection.remove&lt;\/tt>, &lt;tt>removeAll&lt;\/tt>,\r\n     * &lt;tt>retainAll&lt;\/tt> and &lt;tt>clear&lt;\/tt> operations.  It does not\r\n     * support the &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt> operations.\r\n     *\/\r\n    public Collection&lt;V> values() {\r\n        Collection&lt;V> vs = values;\r\n        return (vs != null ? 
vs : (values = new Values()));\r\n    }\r\n\r\n    private final class Values extends AbstractCollection&lt;V> {\r\n        public Iterator&lt;V> iterator() {\r\n            return newValueIterator();\r\n        }\r\n        public int size() {\r\n            return size;\r\n        }\r\n        public boolean contains(Object o) {\r\n            return containsValue(o);\r\n        }\r\n        public void clear() {\r\n            HashMap.this.clear();\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns a {@link Set} view of the mappings contained in this map.\r\n     * The set is backed by the map, so changes to the map are\r\n     * reflected in the set, and vice-versa.  If the map is modified\r\n     * while an iteration over the set is in progress (except through\r\n     * the iterator's own &lt;tt>remove&lt;\/tt> operation, or through the\r\n     * &lt;tt>setValue&lt;\/tt> operation on a map entry returned by the\r\n     * iterator) the results of the iteration are undefined.  The set\r\n     * supports element removal, which removes the corresponding\r\n     * mapping from the map, via the &lt;tt>Iterator.remove&lt;\/tt>,\r\n     * &lt;tt>Set.remove&lt;\/tt>, &lt;tt>removeAll&lt;\/tt>, &lt;tt>retainAll&lt;\/tt> and\r\n     * &lt;tt>clear&lt;\/tt> operations.  It does not support the\r\n     * &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt> operations.\r\n     *\r\n     * @return a set view of the mappings contained in this map\r\n     *\/\r\n    public Set&lt;Map.Entry&lt;K,V>> entrySet() {\r\n        return entrySet0();\r\n    }\r\n\r\n    private Set&lt;Map.Entry&lt;K,V>> entrySet0() {\r\n        Set&lt;Map.Entry&lt;K,V>> es = entrySet;\r\n        return es != null ? 
es : (entrySet = new EntrySet());\r\n    }\r\n\r\n    private final class EntrySet extends AbstractSet&lt;Map.Entry&lt;K,V>> {\r\n        public Iterator&lt;Map.Entry&lt;K,V>> iterator() {\r\n            return newEntryIterator();\r\n        }\r\n        public boolean contains(Object o) {\r\n            if (!(o instanceof Map.Entry))\r\n                return false;\r\n            Map.Entry&lt;K,V> e = (Map.Entry&lt;K,V>) o;\r\n            Entry&lt;K,V> candidate = getEntry(e.getKey());\r\n            return candidate != null &amp;&amp; candidate.equals(e);\r\n        }\r\n        public boolean remove(Object o) {\r\n            return removeMapping(o) != null;\r\n        }\r\n        public int size() {\r\n            return size;\r\n        }\r\n        public void clear() {\r\n            HashMap.this.clear();\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Save the state of the &lt;tt>HashMap&lt;\/tt> instance to a stream (i.e.,\r\n     * serialize it).\r\n     *\r\n     * @serialData The &lt;i>capacity&lt;\/i> of the HashMap (the length of the\r\n     *             bucket array) is emitted (int), followed by the\r\n     *             &lt;i>size&lt;\/i> (an int, the number of key-value\r\n     *             mappings), followed by the key (Object) and value (Object)\r\n     *             for each key-value mapping.  The key-value mappings are\r\n     *             emitted in no particular order.\r\n     *\/\r\n    private void writeObject(ObjectOutputStream s)\r\n        throws IOException\r\n    {\r\n        Iterator&lt;Map.Entry&lt;K,V>> i =\r\n            (size > 0) ? 
entrySet0().iterator() : null;\r\n\r\n        \/\/ Write out the threshold, loadfactor, and any hidden stuff\r\n        s.defaultWriteObject();\r\n\r\n        \/\/ Write out number of buckets\r\n        s.writeInt(table.length);\r\n\r\n        \/\/ Write out size (number of Mappings)\r\n        s.writeInt(size);\r\n\r\n        \/\/ Write out keys and values (alternating)\r\n        if (size > 0) {\r\n            for(Map.Entry&lt;K,V> e : entrySet0()) {\r\n                s.writeObject(e.getKey());\r\n                s.writeObject(e.getValue());\r\n            }\r\n        }\r\n    }\r\n\r\n    private static final long serialVersionUID = 362498820763181265L;\r\n\r\n    \/**\r\n     * Reconstitute the {@code HashMap} instance from a stream (i.e.,\r\n     * deserialize it).\r\n     *\/\r\n    private void readObject(ObjectInputStream s)\r\n         throws IOException, ClassNotFoundException\r\n    {\r\n        \/\/ Read in the threshold (ignored), loadfactor, and any hidden stuff\r\n        s.defaultReadObject();\r\n        if (loadFactor &lt;= 0 || Float.isNaN(loadFactor))\r\n            throw new InvalidObjectException(\"Illegal load factor: \" +\r\n                                               loadFactor);\r\n\r\n        \/\/ set hashSeed (can only happen after VM boot)\r\n        Holder.UNSAFE.putIntVolatile(this, Holder.HASHSEED_OFFSET,\r\n                sun.misc.Hashing.randomHashSeed(this));\r\n\r\n        \/\/ Read in number of buckets and allocate the bucket array;\r\n        s.readInt(); \/\/ ignored\r\n\r\n        \/\/ Read number of mappings\r\n        int mappings = s.readInt();\r\n        if (mappings &lt; 0)\r\n            throw new InvalidObjectException(\"Illegal mappings count: \" +\r\n                                               mappings);\r\n\r\n        int initialCapacity = (int) Math.min(\r\n                \/\/ capacity chosen by number of mappings\r\n                \/\/ and desired load (if >= 0.25)\r\n                mappings * 
Math.min(1 \/ loadFactor, 4.0f),\r\n                \/\/ we have limits...\r\n                HashMap.MAXIMUM_CAPACITY);\r\n        int capacity = 1;\r\n        \/\/ find smallest power of two which holds all mappings\r\n        while (capacity &lt; initialCapacity) {\r\n            capacity &lt;&lt;= 1;\r\n        }\r\n\r\n        table = new Entry&#91;capacity];\r\n        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);\r\n        useAltHashing = sun.misc.VM.isBooted() &amp;&amp;\r\n                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);\r\n\r\n        init();  \/\/ Give subclass a chance to do its thing.\r\n\r\n        \/\/ Read the keys and values, and put the mappings in the HashMap\r\n        for (int i=0; i&lt;mappings; i++) {\r\n            K key = (K) s.readObject();\r\n            V value = (V) s.readObject();\r\n            putForCreate(key, value);\r\n        }\r\n    }\r\n\r\n    \/\/ These methods are used when serializing HashSets\r\n    int   capacity()     { return table.length; }\r\n    float loadFactor()   { return loadFactor;   }\r\n}\r\n<\/code><\/pre>\n\n\n\n<p>jdk 8 HashMap source code<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/*\r\n * Copyright (c) 1997, 2021, Oracle and\/or its affiliates. All rights reserved.\r\n * ORACLE PROPRIETARY\/CONFIDENTIAL. 
Use is subject to license terms.\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\r\n *\/\r\n\r\npackage top.qaqaq.java.P548.src8;\r\n\r\nimport java.io.IOException;\r\nimport java.io.InvalidObjectException;\r\nimport java.io.ObjectInputStream;\r\nimport java.io.Serializable;\r\nimport java.lang.reflect.ParameterizedType;\r\nimport java.lang.reflect.Type;\r\nimport java.util.*;\r\nimport java.util.function.BiConsumer;\r\nimport java.util.function.BiFunction;\r\nimport java.util.function.Consumer;\r\nimport java.util.function.Function;\r\nimport sun.misc.SharedSecrets;\r\n\r\n\/**\r\n * Hash table based implementation of the &lt;tt>Map&lt;\/tt> interface.  This\r\n * implementation provides all of the optional map operations, and permits\r\n * &lt;tt>null&lt;\/tt> values and the &lt;tt>null&lt;\/tt> key.  (The &lt;tt>HashMap&lt;\/tt>\r\n * class is roughly equivalent to &lt;tt>Hashtable&lt;\/tt>, except that it is\r\n * unsynchronized and permits nulls.)  This class makes no guarantees as to\r\n * the order of the map; in particular, it does not guarantee that the order\r\n * will remain constant over time.\r\n *\r\n * &lt;p>This implementation provides constant-time performance for the basic\r\n * operations (&lt;tt>get&lt;\/tt> and &lt;tt>put&lt;\/tt>), assuming the hash function\r\n * disperses the elements properly among the buckets.  Iteration over\r\n * collection views requires time proportional to the \"capacity\" of the\r\n * &lt;tt>HashMap&lt;\/tt> instance (the number of buckets) plus its size (the number\r\n * of key-value mappings).  Thus, it's very important not to set the initial\r\n * capacity too high (or the load factor too low) if iteration performance is\r\n * important.\r\n *\r\n * &lt;p>An instance of &lt;tt>HashMap&lt;\/tt> has two parameters that affect its\r\n * performance: &lt;i>initial capacity&lt;\/i> and &lt;i>load factor&lt;\/i>.  
The\r\n * &lt;i>capacity&lt;\/i> is the number of buckets in the hash table, and the initial\r\n * capacity is simply the capacity at the time the hash table is created.  The\r\n * &lt;i>load factor&lt;\/i> is a measure of how full the hash table is allowed to\r\n * get before its capacity is automatically increased.  When the number of\r\n * entries in the hash table exceeds the product of the load factor and the\r\n * current capacity, the hash table is &lt;i>rehashed&lt;\/i> (that is, internal data\r\n * structures are rebuilt) so that the hash table has approximately twice the\r\n * number of buckets.\r\n *\r\n * &lt;p>As a general rule, the default load factor (.75) offers a good\r\n * tradeoff between time and space costs.  Higher values decrease the\r\n * space overhead but increase the lookup cost (reflected in most of\r\n * the operations of the &lt;tt>HashMap&lt;\/tt> class, including\r\n * &lt;tt>get&lt;\/tt> and &lt;tt>put&lt;\/tt>).  The expected number of entries in\r\n * the map and its load factor should be taken into account when\r\n * setting its initial capacity, so as to minimize the number of\r\n * rehash operations.  If the initial capacity is greater than the\r\n * maximum number of entries divided by the load factor, no rehash\r\n * operations will ever occur.\r\n *\r\n * &lt;p>If many mappings are to be stored in a &lt;tt>HashMap&lt;\/tt>\r\n * instance, creating it with a sufficiently large capacity will allow\r\n * the mappings to be stored more efficiently than letting it perform\r\n * automatic rehashing as needed to grow the table.  Note that using\r\n * many keys with the same {@code hashCode()} is a sure way to slow\r\n * down performance of any hash table. 
To ameliorate impact, when keys\r\n * are {@link Comparable}, this class may use comparison order among\r\n * keys to help break ties.\r\n *\r\n * &lt;p>&lt;strong>Note that this implementation is not synchronized.&lt;\/strong>\r\n * If multiple threads access a hash map concurrently, and at least one of\r\n * the threads modifies the map structurally, it &lt;i>must&lt;\/i> be\r\n * synchronized externally.  (A structural modification is any operation\r\n * that adds or deletes one or more mappings; merely changing the value\r\n * associated with a key that an instance already contains is not a\r\n * structural modification.)  This is typically accomplished by\r\n * synchronizing on some object that naturally encapsulates the map.\r\n *\r\n * If no such object exists, the map should be \"wrapped\" using the\r\n * {@link Collections#synchronizedMap Collections.synchronizedMap}\r\n * method.  This is best done at creation time, to prevent accidental\r\n * unsynchronized access to the map:&lt;pre>\r\n *   Map m = Collections.synchronizedMap(new HashMap(...));&lt;\/pre>\r\n *\r\n * &lt;p>The iterators returned by all of this class's \"collection view methods\"\r\n * are &lt;i>fail-fast&lt;\/i>: if the map is structurally modified at any time after\r\n * the iterator is created, in any way except through the iterator's own\r\n * &lt;tt>remove&lt;\/tt> method, the iterator will throw a\r\n * {@link ConcurrentModificationException}.  Thus, in the face of concurrent\r\n * modification, the iterator fails quickly and cleanly, rather than risking\r\n * arbitrary, non-deterministic behavior at an undetermined time in the\r\n * future.\r\n *\r\n * &lt;p>Note that the fail-fast behavior of an iterator cannot be guaranteed\r\n * as it is, generally speaking, impossible to make any hard guarantees in the\r\n * presence of unsynchronized concurrent modification.  
Fail-fast iterators\r\n * throw &lt;tt>ConcurrentModificationException&lt;\/tt> on a best-effort basis.\r\n * Therefore, it would be wrong to write a program that depended on this\r\n * exception for its correctness: &lt;i>the fail-fast behavior of iterators\r\n * should be used only to detect bugs.&lt;\/i>\r\n *\r\n * &lt;p>This class is a member of the\r\n * &lt;a href=\"{@docRoot}\/..\/technotes\/guides\/collections\/index.html\">\r\n * Java Collections Framework&lt;\/a>.\r\n *\r\n * @param &lt;K> the type of keys maintained by this map\r\n * @param &lt;V> the type of mapped values\r\n *\r\n * @author  Doug Lea\r\n * @author  Josh Bloch\r\n * @author  Arthur van Hoff\r\n * @author  Neal Gafter\r\n * @see     Object#hashCode()\r\n * @see     Collection\r\n * @see     Map\r\n * @see     TreeMap\r\n * @see     Hashtable\r\n * @since   1.2\r\n *\/\r\npublic class HashMap&lt;K,V> extends AbstractMap&lt;K,V>\r\n    implements Map&lt;K,V>, Cloneable, Serializable {\r\n\r\n    private static final long serialVersionUID = 362498820763181265L;\r\n\r\n    \/*\r\n     * Implementation notes.\r\n     *\r\n     * This map usually acts as a binned (bucketed) hash table, but\r\n     * when bins get too large, they are transformed into bins of\r\n     * TreeNodes, each structured similarly to those in\r\n     * java.util.TreeMap. Most methods try to use normal bins, but\r\n     * relay to TreeNode methods when applicable (simply by checking\r\n     * instanceof a node).  Bins of TreeNodes may be traversed and\r\n     * used like any others, but additionally support faster lookup\r\n     * when overpopulated. 
However, since the vast majority of bins in\r\n     * normal use are not overpopulated, checking for existence of\r\n     * tree bins may be delayed in the course of table methods.\r\n     *\r\n     * Tree bins (i.e., bins whose elements are all TreeNodes) are\r\n     * ordered primarily by hashCode, but in the case of ties, if two\r\n     * elements are of the same \"class C implements Comparable&lt;C>\",\r\n     * type then their compareTo method is used for ordering. (We\r\n     * conservatively check generic types via reflection to validate\r\n     * this -- see method comparableClassFor).  The added complexity\r\n     * of tree bins is worthwhile in providing worst-case O(log n)\r\n     * operations when keys either have distinct hashes or are\r\n     * orderable, Thus, performance degrades gracefully under\r\n     * accidental or malicious usages in which hashCode() methods\r\n     * return values that are poorly distributed, as well as those in\r\n     * which many keys share a hashCode, so long as they are also\r\n     * Comparable. (If neither of these apply, we may waste about a\r\n     * factor of two in time and space compared to taking no\r\n     * precautions. But the only known cases stem from poor user\r\n     * programming practices that are already so slow that this makes\r\n     * little difference.)\r\n     *\r\n     * Because TreeNodes are about twice the size of regular nodes, we\r\n     * use them only when bins contain enough nodes to warrant use\r\n     * (see TREEIFY_THRESHOLD). And when they become too small (due to\r\n     * removal or resizing) they are converted back to plain bins.  In\r\n     * usages with well-distributed user hashCodes, tree bins are\r\n     * rarely used.  
Ideally, under random hashCodes, the frequency of\r\n     * nodes in bins follows a Poisson distribution\r\n     * (http:\/\/en.wikipedia.org\/wiki\/Poisson_distribution) with a\r\n     * parameter of about 0.5 on average for the default resizing\r\n     * threshold of 0.75, although with a large variance because of\r\n     * resizing granularity. Ignoring variance, the expected\r\n     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) \/\r\n     * factorial(k)). The first values are:\r\n     *\r\n     * 0:    0.60653066\r\n     * 1:    0.30326533\r\n     * 2:    0.07581633\r\n     * 3:    0.01263606\r\n     * 4:    0.00157952\r\n     * 5:    0.00015795\r\n     * 6:    0.00001316\r\n     * 7:    0.00000094\r\n     * 8:    0.00000006\r\n     * more: less than 1 in ten million\r\n     *\r\n     * The root of a tree bin is normally its first node.  However,\r\n     * sometimes (currently only upon Iterator.remove), the root might\r\n     * be elsewhere, but can be recovered following parent links\r\n     * (method TreeNode.root()).\r\n     *\r\n     * All applicable internal methods accept a hash code as an\r\n     * argument (as normally supplied from a public method), allowing\r\n     * them to call each other without recomputing user hashCodes.\r\n     * Most internal methods also accept a \"tab\" argument, that is\r\n     * normally the current table, but may be a new or old one when\r\n     * resizing or converting.\r\n     *\r\n     * When bin lists are treeified, split, or untreeified, we keep\r\n     * them in the same relative access\/traversal order (i.e., field\r\n     * Node.next) to better preserve locality, and to slightly\r\n     * simplify handling of splits and traversals that invoke\r\n     * iterator.remove. 
When using comparators on insertion, to keep a\r\n     * total ordering (or as close as is required here) across\r\n     * rebalancings, we compare classes and identityHashCodes as\r\n     * tie-breakers.\r\n     *\r\n     * The use and transitions among plain vs tree modes is\r\n     * complicated by the existence of subclass LinkedHashMap. See\r\n     * below for hook methods defined to be invoked upon insertion,\r\n     * removal and access that allow LinkedHashMap internals to\r\n     * otherwise remain independent of these mechanics. (This also\r\n     * requires that a map instance be passed to some utility methods\r\n     * that may create new nodes.)\r\n     *\r\n     * The concurrent-programming-like SSA-based coding style helps\r\n     * avoid aliasing errors amid all of the twisty pointer operations.\r\n     *\/\r\n\r\n    \/**\r\n     * The default initial capacity - MUST be a power of two.\r\n     *\/\r\n    static final int DEFAULT_INITIAL_CAPACITY = 1 &lt;&lt; 4; \/\/ aka 16\r\n\r\n    \/**\r\n     * The maximum capacity, used if a higher value is implicitly specified\r\n     * by either of the constructors with arguments.\r\n     * MUST be a power of two &lt;= 1&lt;&lt;30.\r\n     *\/\r\n    static final int MAXIMUM_CAPACITY = 1 &lt;&lt; 30;\r\n\r\n    \/**\r\n     * The load factor used when none specified in constructor.\r\n     *\/\r\n    static final float DEFAULT_LOAD_FACTOR = 0.75f;\r\n\r\n    \/**\r\n     * The bin count threshold for using a tree rather than list for a\r\n     * bin.  Bins are converted to trees when adding an element to a\r\n     * bin with at least this many nodes. The value must be greater\r\n     * than 2 and should be at least 8 to mesh with assumptions in\r\n     * tree removal about conversion back to plain bins upon\r\n     * shrinkage.\r\n     *\/\r\n    static final int TREEIFY_THRESHOLD = 8;\r\n\r\n    \/**\r\n     * The bin count threshold for untreeifying a (split) bin during a\r\n     * resize operation. 
Should be less than TREEIFY_THRESHOLD, and at\r\n     * most 6 to mesh with shrinkage detection under removal.\r\n     *\/\r\n    static final int UNTREEIFY_THRESHOLD = 6;\r\n\r\n    \/**\r\n     * The smallest table capacity for which bins may be treeified.\r\n     * (Otherwise the table is resized if too many nodes in a bin.)\r\n     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts\r\n     * between resizing and treeification thresholds.\r\n     *\/\r\n    static final int MIN_TREEIFY_CAPACITY = 64;\r\n\r\n    \/**\r\n     * Basic hash bin node, used for most entries.  (See below for\r\n     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)\r\n     *\/\r\n    static class Node&lt;K,V> implements Entry&lt;K,V> {\r\n        final int hash;\r\n        final K key;\r\n        V value;\r\n        Node&lt;K,V> next;\r\n\r\n        Node(int hash, K key, V value, Node&lt;K,V> next) {\r\n            this.hash = hash;\r\n            this.key = key;\r\n            this.value = value;\r\n            this.next = next;\r\n        }\r\n\r\n        public final K getKey()        { return key; }\r\n        public final V getValue()      { return value; }\r\n        public final String toString() { return key + \"=\" + value; }\r\n\r\n        public final int hashCode() {\r\n            return Objects.hashCode(key) ^ Objects.hashCode(value);\r\n        }\r\n\r\n        public final V setValue(V newValue) {\r\n            V oldValue = value;\r\n            value = newValue;\r\n            return oldValue;\r\n        }\r\n\r\n        public final boolean equals(Object o) {\r\n            if (o == this)\r\n                return true;\r\n            if (o instanceof Map.Entry) {\r\n                Entry&lt;?,?> e = (Entry&lt;?,?>)o;\r\n                if (Objects.equals(key, e.getKey()) &amp;&amp;\r\n                    Objects.equals(value, e.getValue()))\r\n                    return true;\r\n            }\r\n            return false;\r\n        }\r\n  
  }\r\n\r\n    \/* ---------------- Static utilities -------------- *\/\r\n\r\n    \/**\r\n     * Computes key.hashCode() and spreads (XORs) higher bits of hash\r\n     * to lower.  Because the table uses power-of-two masking, sets of\r\n     * hashes that vary only in bits above the current mask will\r\n     * always collide. (Among known examples are sets of Float keys\r\n     * holding consecutive whole numbers in small tables.)  So we\r\n     * apply a transform that spreads the impact of higher bits\r\n     * downward. There is a tradeoff between speed, utility, and\r\n     * quality of bit-spreading. Because many common sets of hashes\r\n     * are already reasonably distributed (so don't benefit from\r\n     * spreading), and because we use trees to handle large sets of\r\n     * collisions in bins, we just XOR some shifted bits in the\r\n     * cheapest possible way to reduce systematic lossage, as well as\r\n     * to incorporate impact of the highest bits that would otherwise\r\n     * never be used in index calculations because of table bounds.\r\n     *\/\r\n    static final int hash(Object key) {\r\n        int h;\r\n        return (key == null) ? 
0 : (h = key.hashCode()) ^ (h >>> 16);\r\n    }\r\n\r\n    \/**\r\n     * Returns x's Class if it is of the form \"class C implements\r\n     * Comparable&lt;C>\", else null.\r\n     *\/\r\n    static Class&lt;?> comparableClassFor(Object x) {\r\n        if (x instanceof Comparable) {\r\n            Class&lt;?> c; Type&#91;] ts, as; Type t; ParameterizedType p;\r\n            if ((c = x.getClass()) == String.class) \/\/ bypass checks\r\n                return c;\r\n            if ((ts = c.getGenericInterfaces()) != null) {\r\n                for (int i = 0; i &lt; ts.length; ++i) {\r\n                    if (((t = ts&#91;i]) instanceof ParameterizedType) &amp;&amp;\r\n                        ((p = (ParameterizedType)t).getRawType() ==\r\n                         Comparable.class) &amp;&amp;\r\n                        (as = p.getActualTypeArguments()) != null &amp;&amp;\r\n                        as.length == 1 &amp;&amp; as&#91;0] == c) \/\/ type arg is c\r\n                        return c;\r\n                }\r\n            }\r\n        }\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Returns k.compareTo(x) if x matches kc (k's screened comparable\r\n     * class), else 0.\r\n     *\/\r\n    @SuppressWarnings({\"rawtypes\",\"unchecked\"}) \/\/ for cast to Comparable\r\n    static int compareComparables(Class&lt;?> kc, Object k, Object x) {\r\n        return (x == null || x.getClass() != kc ? 0 :\r\n                ((Comparable)k).compareTo(x));\r\n    }\r\n\r\n    \/**\r\n     * Returns a power of two size for the given target capacity.\r\n     *\/\r\n    static final int tableSizeFor(int cap) {\r\n        int n = cap - 1;\r\n        n |= n >>> 1;\r\n        n |= n >>> 2;\r\n        n |= n >>> 4;\r\n        n |= n >>> 8;\r\n        n |= n >>> 16;\r\n        return (n &lt; 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? 
MAXIMUM_CAPACITY : n + 1;\r\n    }\r\n\r\n    \/* ---------------- Fields -------------- *\/\r\n\r\n    \/**\r\n     * The table, initialized on first use, and resized as\r\n     * necessary. When allocated, length is always a power of two.\r\n     * (We also tolerate length zero in some operations to allow\r\n     * bootstrapping mechanics that are currently not needed.)\r\n     *\/\r\n    transient Node&lt;K,V>&#91;] table;\r\n\r\n    \/**\r\n     * Holds cached entrySet(). Note that AbstractMap fields are used\r\n     * for keySet() and values().\r\n     *\/\r\n    transient Set&lt;Entry&lt;K,V>> entrySet;\r\n\r\n    \/**\r\n     * The number of key-value mappings contained in this map.\r\n     *\/\r\n    transient int size;\r\n\r\n    \/**\r\n     * The number of times this HashMap has been structurally modified\r\n     * Structural modifications are those that change the number of mappings in\r\n     * the HashMap or otherwise modify its internal structure (e.g.,\r\n     * rehash).  This field is used to make iterators on Collection-views of\r\n     * the HashMap fail-fast.  
(See ConcurrentModificationException).\r\n     *\/\r\n    transient int modCount;\r\n\r\n    \/**\r\n     * The next size value at which to resize (capacity * load factor).\r\n     *\r\n     * @serial\r\n     *\/\r\n    \/\/ (The javadoc description is true upon serialization.\r\n    \/\/ Additionally, if the table array has not been allocated, this\r\n    \/\/ field holds the initial array capacity, or zero signifying\r\n    \/\/ DEFAULT_INITIAL_CAPACITY.)\r\n    int threshold;\r\n\r\n    \/**\r\n     * The load factor for the hash table.\r\n     *\r\n     * @serial\r\n     *\/\r\n    final float loadFactor;\r\n\r\n    \/* ---------------- Public operations -------------- *\/\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the specified initial\r\n     * capacity and load factor.\r\n     *\r\n     * @param  initialCapacity the initial capacity\r\n     * @param  loadFactor      the load factor\r\n     * @throws IllegalArgumentException if the initial capacity is negative\r\n     *         or the load factor is nonpositive\r\n     *\/\r\n    public HashMap(int initialCapacity, float loadFactor) {\r\n        if (initialCapacity &lt; 0)\r\n            throw new IllegalArgumentException(\"Illegal initial capacity: \" +\r\n                                               initialCapacity);\r\n        if (initialCapacity > MAXIMUM_CAPACITY)\r\n            initialCapacity = MAXIMUM_CAPACITY;\r\n        if (loadFactor &lt;= 0 || Float.isNaN(loadFactor))\r\n            throw new IllegalArgumentException(\"Illegal load factor: \" +\r\n                                               loadFactor);\r\n        this.loadFactor = loadFactor;\r\n        this.threshold = tableSizeFor(initialCapacity);\r\n    }\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the specified initial\r\n     * capacity and the default load factor (0.75).\r\n     *\r\n     * @param  initialCapacity the initial capacity.\r\n     * @throws 
IllegalArgumentException if the initial capacity is negative.\r\n     *\/\r\n    public HashMap(int initialCapacity) {\r\n        this(initialCapacity, DEFAULT_LOAD_FACTOR);\r\n    }\r\n\r\n    \/**\r\n     * Constructs an empty &lt;tt>HashMap&lt;\/tt> with the default initial capacity\r\n     * (16) and the default load factor (0.75).\r\n     *\/\r\n    public HashMap() {\r\n        this.loadFactor = DEFAULT_LOAD_FACTOR; \/\/ all other fields defaulted\r\n    }\r\n\r\n    \/**\r\n     * Constructs a new &lt;tt>HashMap&lt;\/tt> with the same mappings as the\r\n     * specified &lt;tt>Map&lt;\/tt>.  The &lt;tt>HashMap&lt;\/tt> is created with\r\n     * default load factor (0.75) and an initial capacity sufficient to\r\n     * hold the mappings in the specified &lt;tt>Map&lt;\/tt>.\r\n     *\r\n     * @param   m the map whose mappings are to be placed in this map\r\n     * @throws  NullPointerException if the specified map is null\r\n     *\/\r\n    public HashMap(Map&lt;? extends K, ? extends V> m) {\r\n        this.loadFactor = DEFAULT_LOAD_FACTOR;\r\n        putMapEntries(m, false);\r\n    }\r\n\r\n    \/**\r\n     * Implements Map.putAll and Map constructor.\r\n     *\r\n     * @param m the map\r\n     * @param evict false when initially constructing this map, else\r\n     * true (relayed to method afterNodeInsertion).\r\n     *\/\r\n    final void putMapEntries(Map&lt;? extends K, ? extends V> m, boolean evict) {\r\n        int s = m.size();\r\n        if (s > 0) {\r\n            if (table == null) { \/\/ pre-size\r\n                float ft = ((float)s \/ loadFactor) + 1.0F;\r\n                int t = ((ft &lt; (float)MAXIMUM_CAPACITY) ?\r\n                         (int)ft : MAXIMUM_CAPACITY);\r\n                if (t > threshold)\r\n                    threshold = tableSizeFor(t);\r\n            }\r\n            else if (s > threshold)\r\n                resize();\r\n            for (Entry&lt;? extends K, ? 
extends V> e : m.entrySet()) {\r\n                K key = e.getKey();\r\n                V value = e.getValue();\r\n                putVal(hash(key), key, value, false, evict);\r\n            }\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns the number of key-value mappings in this map.\r\n     *\r\n     * @return the number of key-value mappings in this map\r\n     *\/\r\n    public int size() {\r\n        return size;\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map contains no key-value mappings.\r\n     *\r\n     * @return &lt;tt>true&lt;\/tt> if this map contains no key-value mappings\r\n     *\/\r\n    public boolean isEmpty() {\r\n        return size == 0;\r\n    }\r\n\r\n    \/**\r\n     * Returns the value to which the specified key is mapped,\r\n     * or {@code null} if this map contains no mapping for the key.\r\n     *\r\n     * &lt;p>More formally, if this map contains a mapping from a key\r\n     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :\r\n     * key.equals(k))}, then this method returns {@code v}; otherwise\r\n     * it returns {@code null}.  (There can be at most one such mapping.)\r\n     *\r\n     * &lt;p>A return value of {@code null} does not &lt;i>necessarily&lt;\/i>\r\n     * indicate that the map contains no mapping for the key; it's also\r\n     * possible that the map explicitly maps the key to {@code null}.\r\n     * The {@link #containsKey containsKey} operation may be used to\r\n     * distinguish these two cases.\r\n     *\r\n     * @see #put(Object, Object)\r\n     *\/\r\n    public V get(Object key) {\r\n        Node&lt;K,V> e;\r\n        return (e = getNode(hash(key), key)) == null ? 
null : e.value;\r\n    }\r\n\r\n    \/**\r\n     * Implements Map.get and related methods.\r\n     *\r\n     * @param hash hash for key\r\n     * @param key the key\r\n     * @return the node, or null if none\r\n     *\/\r\n    final Node&lt;K,V> getNode(int hash, Object key) {\r\n        Node&lt;K,V>&#91;] tab; Node&lt;K,V> first, e; int n; K k;\r\n        if ((tab = table) != null &amp;&amp; (n = tab.length) > 0 &amp;&amp;\r\n            (first = tab&#91;(n - 1) &amp; hash]) != null) {\r\n            if (first.hash == hash &amp;&amp; \/\/ always check first node\r\n                ((k = first.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                return first;\r\n            if ((e = first.next) != null) {\r\n                if (first instanceof TreeNode)\r\n                    return ((TreeNode&lt;K,V>)first).getTreeNode(hash, key);\r\n                do {\r\n                    if (e.hash == hash &amp;&amp;\r\n                        ((k = e.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                        return e;\r\n                } while ((e = e.next) != null);\r\n            }\r\n        }\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map contains a mapping for the\r\n     * specified key.\r\n     *\r\n     * @param   key   The key whose presence in this map is to be tested\r\n     * @return &lt;tt>true&lt;\/tt> if this map contains a mapping for the specified\r\n     * key.\r\n     *\/\r\n    public boolean containsKey(Object key) {\r\n        return getNode(hash(key), key) != null;\r\n    }\r\n\r\n    \/**\r\n     * Associates the specified value with the specified key in this map.\r\n     * If the map previously contained a mapping for the key, the old\r\n     * value is replaced.\r\n     *\r\n     * @param key key with which the specified value is to be associated\r\n     * @param value value to be associated with the specified key\r\n     * @return the previous 
value associated with &lt;tt>key&lt;\/tt>, or\r\n     *         &lt;tt>null&lt;\/tt> if there was no mapping for &lt;tt>key&lt;\/tt>.\r\n     *         (A &lt;tt>null&lt;\/tt> return can also indicate that the map\r\n     *         previously associated &lt;tt>null&lt;\/tt> with &lt;tt>key&lt;\/tt>.)\r\n     *\/\r\n    public V put(K key, V value) {\r\n        return putVal(hash(key), key, value, false, true);\r\n    }\r\n\r\n    \/**\r\n     * Implements Map.put and related methods.\r\n     *\r\n     * @param hash hash for key\r\n     * @param key the key\r\n     * @param value the value to put\r\n     * @param onlyIfAbsent if true, don't change existing value\r\n     * @param evict if false, the table is in creation mode.\r\n     * @return previous value, or null if none\r\n     *\/\r\n    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,\r\n                   boolean evict) {\r\n        Node&lt;K,V>&#91;] tab; Node&lt;K,V> p; int n, i;\r\n        if ((tab = table) == null || (n = tab.length) == 0)\r\n            n = (tab = resize()).length;\r\n        if ((p = tab&#91;i = (n - 1) &amp; hash]) == null)\r\n            tab&#91;i] = newNode(hash, key, value, null);\r\n        else {\r\n            Node&lt;K,V> e; K k;\r\n            if (p.hash == hash &amp;&amp;\r\n                ((k = p.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                e = p;\r\n            else if (p instanceof TreeNode)\r\n                e = ((TreeNode&lt;K,V>)p).putTreeVal(this, tab, hash, key, value);\r\n            else {\r\n                for (int binCount = 0; ; ++binCount) {\r\n                    if ((e = p.next) == null) {\r\n                        p.next = newNode(hash, key, value, null);\r\n                        if (binCount >= TREEIFY_THRESHOLD - 1) \/\/ -1 for 1st\r\n                            treeifyBin(tab, hash);\r\n                        break;\r\n                    }\r\n                    if (e.hash == hash &amp;&amp;\r\n            
            ((k = e.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                        break;\r\n                    p = e;\r\n                }\r\n            }\r\n            if (e != null) { \/\/ existing mapping for key\r\n                V oldValue = e.value;\r\n                if (!onlyIfAbsent || oldValue == null)\r\n                    e.value = value;\r\n                afterNodeAccess(e);\r\n                return oldValue;\r\n            }\r\n        }\r\n        ++modCount;\r\n        if (++size > threshold)\r\n            resize();\r\n        afterNodeInsertion(evict);\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Initializes or doubles table size.  If null, allocates in\r\n     * accord with initial capacity target held in field threshold.\r\n     * Otherwise, because we are using power-of-two expansion, the\r\n     * elements from each bin must either stay at same index, or move\r\n     * with a power of two offset in the new table.\r\n     *\r\n     * @return the table\r\n     *\/\r\n    final Node&lt;K,V>&#91;] resize() {\r\n        Node&lt;K,V>&#91;] oldTab = table;\r\n        int oldCap = (oldTab == null) ? 
0 : oldTab.length;\r\n        int oldThr = threshold;\r\n        int newCap, newThr = 0;\r\n        if (oldCap > 0) {\r\n            if (oldCap >= MAXIMUM_CAPACITY) {\r\n                threshold = Integer.MAX_VALUE;\r\n                return oldTab;\r\n            }\r\n            else if ((newCap = oldCap &lt;&lt; 1) &lt; MAXIMUM_CAPACITY &amp;&amp;\r\n                     oldCap >= DEFAULT_INITIAL_CAPACITY)\r\n                newThr = oldThr &lt;&lt; 1; \/\/ double threshold\r\n        }\r\n        else if (oldThr > 0) \/\/ initial capacity was placed in threshold\r\n            newCap = oldThr;\r\n        else {               \/\/ zero initial threshold signifies using defaults\r\n            newCap = DEFAULT_INITIAL_CAPACITY;\r\n            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);\r\n        }\r\n        if (newThr == 0) {\r\n            float ft = (float)newCap * loadFactor;\r\n            newThr = (newCap &lt; MAXIMUM_CAPACITY &amp;&amp; ft &lt; (float)MAXIMUM_CAPACITY ?\r\n                      (int)ft : Integer.MAX_VALUE);\r\n        }\r\n        threshold = newThr;\r\n        @SuppressWarnings({\"rawtypes\",\"unchecked\"})\r\n        Node&lt;K,V>&#91;] newTab = (Node&lt;K,V>&#91;])new Node&#91;newCap];\r\n        table = newTab;\r\n        if (oldTab != null) {\r\n            for (int j = 0; j &lt; oldCap; ++j) {\r\n                Node&lt;K,V> e;\r\n                if ((e = oldTab&#91;j]) != null) {\r\n                    oldTab&#91;j] = null;\r\n                    if (e.next == null)\r\n                        newTab&#91;e.hash &amp; (newCap - 1)] = e;\r\n                    else if (e instanceof TreeNode)\r\n                        ((TreeNode&lt;K,V>)e).split(this, newTab, j, oldCap);\r\n                    else { \/\/ preserve order\r\n                        Node&lt;K,V> loHead = null, loTail = null;\r\n                        Node&lt;K,V> hiHead = null, hiTail = null;\r\n                        Node&lt;K,V> next;\r\n          
              do {\r\n                            next = e.next;\r\n                            if ((e.hash &amp; oldCap) == 0) {\r\n                                if (loTail == null)\r\n                                    loHead = e;\r\n                                else\r\n                                    loTail.next = e;\r\n                                loTail = e;\r\n                            }\r\n                            else {\r\n                                if (hiTail == null)\r\n                                    hiHead = e;\r\n                                else\r\n                                    hiTail.next = e;\r\n                                hiTail = e;\r\n                            }\r\n                        } while ((e = next) != null);\r\n                        if (loTail != null) {\r\n                            loTail.next = null;\r\n                            newTab&#91;j] = loHead;\r\n                        }\r\n                        if (hiTail != null) {\r\n                            hiTail.next = null;\r\n                            newTab&#91;j + oldCap] = hiHead;\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n        }\r\n        return newTab;\r\n    }\r\n\r\n    \/**\r\n     * Replaces all linked nodes in bin at index for given hash unless\r\n     * table is too small, in which case resizes instead.\r\n     *\/\r\n    final void treeifyBin(Node&lt;K,V>&#91;] tab, int hash) {\r\n        int n, index; Node&lt;K,V> e;\r\n        if (tab == null || (n = tab.length) &lt; MIN_TREEIFY_CAPACITY)\r\n            resize();\r\n        else if ((e = tab&#91;index = (n - 1) &amp; hash]) != null) {\r\n            TreeNode&lt;K,V> hd = null, tl = null;\r\n            do {\r\n                TreeNode&lt;K,V> p = replacementTreeNode(e, null);\r\n                if (tl == null)\r\n                    hd = p;\r\n                else {\r\n                    p.prev = tl;\r\n       
             tl.next = p;\r\n                }\r\n                tl = p;\r\n            } while ((e = e.next) != null);\r\n            if ((tab&#91;index] = hd) != null)\r\n                hd.treeify(tab);\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Copies all of the mappings from the specified map to this map.\r\n     * These mappings will replace any mappings that this map had for\r\n     * any of the keys currently in the specified map.\r\n     *\r\n     * @param m mappings to be stored in this map\r\n     * @throws NullPointerException if the specified map is null\r\n     *\/\r\n    public void putAll(Map&lt;? extends K, ? extends V> m) {\r\n        putMapEntries(m, true);\r\n    }\r\n\r\n    \/**\r\n     * Removes the mapping for the specified key from this map if present.\r\n     *\r\n     * @param  key key whose mapping is to be removed from the map\r\n     * @return the previous value associated with &lt;tt>key&lt;\/tt>, or\r\n     *         &lt;tt>null&lt;\/tt> if there was no mapping for &lt;tt>key&lt;\/tt>.\r\n     *         (A &lt;tt>null&lt;\/tt> return can also indicate that the map\r\n     *         previously associated &lt;tt>null&lt;\/tt> with &lt;tt>key&lt;\/tt>.)\r\n     *\/\r\n    public V remove(Object key) {\r\n        Node&lt;K,V> e;\r\n        return (e = removeNode(hash(key), key, null, false, true)) == null ?\r\n            null : e.value;\r\n    }\r\n\r\n    \/**\r\n     * Implements Map.remove and related methods.\r\n     *\r\n     * @param hash hash for key\r\n     * @param key the key\r\n     * @param value the value to match if matchValue, else ignored\r\n     * @param matchValue if true only remove if value is equal\r\n     * @param movable if false do not move other nodes while removing\r\n     * @return the node, or null if none\r\n     *\/\r\n    final Node&lt;K,V> removeNode(int hash, Object key, Object value,\r\n                               boolean matchValue, boolean movable) {\r\n        Node&lt;K,V>&#91;] tab; 
Node&lt;K,V> p; int n, index;\r\n        if ((tab = table) != null &amp;&amp; (n = tab.length) > 0 &amp;&amp;\r\n            (p = tab&#91;index = (n - 1) &amp; hash]) != null) {\r\n            Node&lt;K,V> node = null, e; K k; V v;\r\n            if (p.hash == hash &amp;&amp;\r\n                ((k = p.key) == key || (key != null &amp;&amp; key.equals(k))))\r\n                node = p;\r\n            else if ((e = p.next) != null) {\r\n                if (p instanceof TreeNode)\r\n                    node = ((TreeNode&lt;K,V>)p).getTreeNode(hash, key);\r\n                else {\r\n                    do {\r\n                        if (e.hash == hash &amp;&amp;\r\n                            ((k = e.key) == key ||\r\n                             (key != null &amp;&amp; key.equals(k)))) {\r\n                            node = e;\r\n                            break;\r\n                        }\r\n                        p = e;\r\n                    } while ((e = e.next) != null);\r\n                }\r\n            }\r\n            if (node != null &amp;&amp; (!matchValue || (v = node.value) == value ||\r\n                                 (value != null &amp;&amp; value.equals(v)))) {\r\n                if (node instanceof TreeNode)\r\n                    ((TreeNode&lt;K,V>)node).removeTreeNode(this, tab, movable);\r\n                else if (node == p)\r\n                    tab&#91;index] = node.next;\r\n                else\r\n                    p.next = node.next;\r\n                ++modCount;\r\n                --size;\r\n                afterNodeRemoval(node);\r\n                return node;\r\n            }\r\n        }\r\n        return null;\r\n    }\r\n\r\n    \/**\r\n     * Removes all of the mappings from this map.\r\n     * The map will be empty after this call returns.\r\n     *\/\r\n    public void clear() {\r\n        Node&lt;K,V>&#91;] tab;\r\n        modCount++;\r\n        if ((tab = table) != null &amp;&amp; size > 0) {\r\n            size = 
0;\r\n            for (int i = 0; i &lt; tab.length; ++i)\r\n                tab&#91;i] = null;\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns &lt;tt>true&lt;\/tt> if this map maps one or more keys to the\r\n     * specified value.\r\n     *\r\n     * @param value value whose presence in this map is to be tested\r\n     * @return &lt;tt>true&lt;\/tt> if this map maps one or more keys to the\r\n     *         specified value\r\n     *\/\r\n    public boolean containsValue(Object value) {\r\n        Node&lt;K,V>&#91;] tab; V v;\r\n        if ((tab = table) != null &amp;&amp; size > 0) {\r\n            for (int i = 0; i &lt; tab.length; ++i) {\r\n                for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next) {\r\n                    if ((v = e.value) == value ||\r\n                        (value != null &amp;&amp; value.equals(v)))\r\n                        return true;\r\n                }\r\n            }\r\n        }\r\n        return false;\r\n    }\r\n\r\n    \/**\r\n     * Returns a {@link Set} view of the keys contained in this map.\r\n     * The set is backed by the map, so changes to the map are\r\n     * reflected in the set, and vice-versa.  If the map is modified\r\n     * while an iteration over the set is in progress (except through\r\n     * the iterator's own &lt;tt>remove&lt;\/tt> operation), the results of\r\n     * the iteration are undefined.  The set supports element removal,\r\n     * which removes the corresponding mapping from the map, via the\r\n     * &lt;tt>Iterator.remove&lt;\/tt>, &lt;tt>Set.remove&lt;\/tt>,\r\n     * &lt;tt>removeAll&lt;\/tt>, &lt;tt>retainAll&lt;\/tt>, and &lt;tt>clear&lt;\/tt>\r\n     * operations.  
It does not support the &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt>\r\n     * operations.\r\n     *\r\n     * @return a set view of the keys contained in this map\r\n     *\/\r\n    public Set&lt;K> keySet() {\r\n        Set&lt;K> ks = keySet;\r\n        if (ks == null) {\r\n            ks = new KeySet();\r\n            keySet = ks;\r\n        }\r\n        return ks;\r\n    }\r\n\r\n    final class KeySet extends AbstractSet&lt;K> {\r\n        public final int size()                 { return size; }\r\n        public final void clear()               { HashMap.this.clear(); }\r\n        public final Iterator&lt;K> iterator()     { return new KeyIterator(); }\r\n        public final boolean contains(Object o) { return containsKey(o); }\r\n        public final boolean remove(Object key) {\r\n            return removeNode(hash(key), key, null, false, true) != null;\r\n        }\r\n        public final Spliterator&lt;K> spliterator() {\r\n            return new KeySpliterator&lt;>(HashMap.this, 0, -1, 0, 0);\r\n        }\r\n        public final void forEach(Consumer&lt;? super K> action) {\r\n            Node&lt;K,V>&#91;] tab;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            if (size > 0 &amp;&amp; (tab = table) != null) {\r\n                int mc = modCount;\r\n                for (int i = 0; i &lt; tab.length; ++i) {\r\n                    for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next)\r\n                        action.accept(e.key);\r\n                }\r\n                if (modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns a {@link Collection} view of the values contained in this map.\r\n     * The collection is backed by the map, so changes to the map are\r\n     * reflected in the collection, and vice-versa.  
If the map is\r\n     * modified while an iteration over the collection is in progress\r\n     * (except through the iterator's own &lt;tt>remove&lt;\/tt> operation),\r\n     * the results of the iteration are undefined.  The collection\r\n     * supports element removal, which removes the corresponding\r\n     * mapping from the map, via the &lt;tt>Iterator.remove&lt;\/tt>,\r\n     * &lt;tt>Collection.remove&lt;\/tt>, &lt;tt>removeAll&lt;\/tt>,\r\n     * &lt;tt>retainAll&lt;\/tt> and &lt;tt>clear&lt;\/tt> operations.  It does not\r\n     * support the &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt> operations.\r\n     *\r\n     * @return a view of the values contained in this map\r\n     *\/\r\n    public Collection&lt;V> values() {\r\n        Collection&lt;V> vs = values;\r\n        if (vs == null) {\r\n            vs = new Values();\r\n            values = vs;\r\n        }\r\n        return vs;\r\n    }\r\n\r\n    final class Values extends AbstractCollection&lt;V> {\r\n        public final int size()                 { return size; }\r\n        public final void clear()               { HashMap.this.clear(); }\r\n        public final Iterator&lt;V> iterator()     { return new ValueIterator(); }\r\n        public final boolean contains(Object o) { return containsValue(o); }\r\n        public final Spliterator&lt;V> spliterator() {\r\n            return new ValueSpliterator&lt;>(HashMap.this, 0, -1, 0, 0);\r\n        }\r\n        public final void forEach(Consumer&lt;? 
super V> action) {\r\n            Node&lt;K,V>&#91;] tab;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            if (size > 0 &amp;&amp; (tab = table) != null) {\r\n                int mc = modCount;\r\n                for (int i = 0; i &lt; tab.length; ++i) {\r\n                    for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next)\r\n                        action.accept(e.value);\r\n                }\r\n                if (modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n    }\r\n\r\n    \/**\r\n     * Returns a {@link Set} view of the mappings contained in this map.\r\n     * The set is backed by the map, so changes to the map are\r\n     * reflected in the set, and vice-versa.  If the map is modified\r\n     * while an iteration over the set is in progress (except through\r\n     * the iterator's own &lt;tt>remove&lt;\/tt> operation, or through the\r\n     * &lt;tt>setValue&lt;\/tt> operation on a map entry returned by the\r\n     * iterator) the results of the iteration are undefined.  The set\r\n     * supports element removal, which removes the corresponding\r\n     * mapping from the map, via the &lt;tt>Iterator.remove&lt;\/tt>,\r\n     * &lt;tt>Set.remove&lt;\/tt>, &lt;tt>removeAll&lt;\/tt>, &lt;tt>retainAll&lt;\/tt> and\r\n     * &lt;tt>clear&lt;\/tt> operations.  It does not support the\r\n     * &lt;tt>add&lt;\/tt> or &lt;tt>addAll&lt;\/tt> operations.\r\n     *\r\n     * @return a set view of the mappings contained in this map\r\n     *\/\r\n    public Set&lt;Entry&lt;K,V>> entrySet() {\r\n        Set&lt;Entry&lt;K,V>> es;\r\n        return (es = entrySet) == null ? 
(entrySet = new EntrySet()) : es;\r\n    }\r\n\r\n    final class EntrySet extends AbstractSet&lt;Entry&lt;K,V>> {\r\n        public final int size()                 { return size; }\r\n        public final void clear()               { HashMap.this.clear(); }\r\n        public final Iterator&lt;Entry&lt;K,V>> iterator() {\r\n            return new EntryIterator();\r\n        }\r\n        public final boolean contains(Object o) {\r\n            if (!(o instanceof Map.Entry))\r\n                return false;\r\n            Entry&lt;?,?> e = (Entry&lt;?,?>) o;\r\n            Object key = e.getKey();\r\n            Node&lt;K,V> candidate = getNode(hash(key), key);\r\n            return candidate != null &amp;&amp; candidate.equals(e);\r\n        }\r\n        public final boolean remove(Object o) {\r\n            if (o instanceof Map.Entry) {\r\n                Entry&lt;?,?> e = (Entry&lt;?,?>) o;\r\n                Object key = e.getKey();\r\n                Object value = e.getValue();\r\n                return removeNode(hash(key), key, value, true, true) != null;\r\n            }\r\n            return false;\r\n        }\r\n        public final Spliterator&lt;Entry&lt;K,V>> spliterator() {\r\n            return new EntrySpliterator&lt;>(HashMap.this, 0, -1, 0, 0);\r\n        }\r\n        public final void forEach(Consumer&lt;? 
super Entry&lt;K,V>> action) {\r\n            Node&lt;K,V>&#91;] tab;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            if (size > 0 &amp;&amp; (tab = table) != null) {\r\n                int mc = modCount;\r\n                for (int i = 0; i &lt; tab.length; ++i) {\r\n                    for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next)\r\n                        action.accept(e);\r\n                }\r\n                if (modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n    }\r\n\r\n    \/\/ Overrides of JDK8 Map extension methods\r\n\r\n    @Override\r\n    public V getOrDefault(Object key, V defaultValue) {\r\n        Node&lt;K,V> e;\r\n        return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;\r\n    }\r\n\r\n    @Override\r\n    public V putIfAbsent(K key, V value) {\r\n        return putVal(hash(key), key, value, true, true);\r\n    }\r\n\r\n    @Override\r\n    public boolean remove(Object key, Object value) {\r\n        return removeNode(hash(key), key, value, true, true) != null;\r\n    }\r\n\r\n    @Override\r\n    public boolean replace(K key, V oldValue, V newValue) {\r\n        Node&lt;K,V> e; V v;\r\n        if ((e = getNode(hash(key), key)) != null &amp;&amp;\r\n            ((v = e.value) == oldValue || (v != null &amp;&amp; v.equals(oldValue)))) {\r\n            e.value = newValue;\r\n            afterNodeAccess(e);\r\n            return true;\r\n        }\r\n        return false;\r\n    }\r\n\r\n    @Override\r\n    public V replace(K key, V value) {\r\n        Node&lt;K,V> e;\r\n        if ((e = getNode(hash(key), key)) != null) {\r\n            V oldValue = e.value;\r\n            e.value = value;\r\n            afterNodeAccess(e);\r\n            return oldValue;\r\n        }\r\n        return null;\r\n    }\r\n\r\n    @Override\r\n    public V computeIfAbsent(K key,\r\n                             
Function&lt;? super K, ? extends V> mappingFunction) {\r\n        if (mappingFunction == null)\r\n            throw new NullPointerException();\r\n        int hash = hash(key);\r\n        Node&lt;K,V>&#91;] tab; Node&lt;K,V> first; int n, i;\r\n        int binCount = 0;\r\n        TreeNode&lt;K,V> t = null;\r\n        Node&lt;K,V> old = null;\r\n        if (size > threshold || (tab = table) == null ||\r\n            (n = tab.length) == 0)\r\n            n = (tab = resize()).length;\r\n        if ((first = tab&#91;i = (n - 1) &amp; hash]) != null) {\r\n            if (first instanceof TreeNode)\r\n                old = (t = (TreeNode&lt;K,V>)first).getTreeNode(hash, key);\r\n            else {\r\n                Node&lt;K,V> e = first; K k;\r\n                do {\r\n                    if (e.hash == hash &amp;&amp;\r\n                        ((k = e.key) == key || (key != null &amp;&amp; key.equals(k)))) {\r\n                        old = e;\r\n                        break;\r\n                    }\r\n                    ++binCount;\r\n                } while ((e = e.next) != null);\r\n            }\r\n            V oldValue;\r\n            if (old != null &amp;&amp; (oldValue = old.value) != null) {\r\n                afterNodeAccess(old);\r\n                return oldValue;\r\n            }\r\n        }\r\n        V v = mappingFunction.apply(key);\r\n        if (v == null) {\r\n            return null;\r\n        } else if (old != null) {\r\n            old.value = v;\r\n            afterNodeAccess(old);\r\n            return v;\r\n        }\r\n        else if (t != null)\r\n            t.putTreeVal(this, tab, hash, key, v);\r\n        else {\r\n            tab&#91;i] = newNode(hash, key, v, first);\r\n            if (binCount >= TREEIFY_THRESHOLD - 1)\r\n                treeifyBin(tab, hash);\r\n        }\r\n        ++modCount;\r\n        ++size;\r\n        afterNodeInsertion(true);\r\n        return v;\r\n    }\r\n\r\n    public V computeIfPresent(K key,\r\n   
                           BiFunction&lt;? super K, ? super V, ? extends V> remappingFunction) {\r\n        if (remappingFunction == null)\r\n            throw new NullPointerException();\r\n        Node&lt;K,V> e; V oldValue;\r\n        int hash = hash(key);\r\n        if ((e = getNode(hash, key)) != null &amp;&amp;\r\n            (oldValue = e.value) != null) {\r\n            V v = remappingFunction.apply(key, oldValue);\r\n            if (v != null) {\r\n                e.value = v;\r\n                afterNodeAccess(e);\r\n                return v;\r\n            }\r\n            else\r\n                removeNode(hash, key, null, false, true);\r\n        }\r\n        return null;\r\n    }\r\n\r\n    @Override\r\n    public V compute(K key,\r\n                     BiFunction&lt;? super K, ? super V, ? extends V> remappingFunction) {\r\n        if (remappingFunction == null)\r\n            throw new NullPointerException();\r\n        int hash = hash(key);\r\n        Node&lt;K,V>&#91;] tab; Node&lt;K,V> first; int n, i;\r\n        int binCount = 0;\r\n        TreeNode&lt;K,V> t = null;\r\n        Node&lt;K,V> old = null;\r\n        if (size > threshold || (tab = table) == null ||\r\n            (n = tab.length) == 0)\r\n            n = (tab = resize()).length;\r\n        if ((first = tab&#91;i = (n - 1) &amp; hash]) != null) {\r\n            if (first instanceof TreeNode)\r\n                old = (t = (TreeNode&lt;K,V>)first).getTreeNode(hash, key);\r\n            else {\r\n                Node&lt;K,V> e = first; K k;\r\n                do {\r\n                    if (e.hash == hash &amp;&amp;\r\n                        ((k = e.key) == key || (key != null &amp;&amp; key.equals(k)))) {\r\n                        old = e;\r\n                        break;\r\n                    }\r\n                    ++binCount;\r\n                } while ((e = e.next) != null);\r\n            }\r\n        }\r\n        V oldValue = (old == null) ? 
null : old.value;\r\n        V v = remappingFunction.apply(key, oldValue);\r\n        if (old != null) {\r\n            if (v != null) {\r\n                old.value = v;\r\n                afterNodeAccess(old);\r\n            }\r\n            else\r\n                removeNode(hash, key, null, false, true);\r\n        }\r\n        else if (v != null) {\r\n            if (t != null)\r\n                t.putTreeVal(this, tab, hash, key, v);\r\n            else {\r\n                tab&#91;i] = newNode(hash, key, v, first);\r\n                if (binCount >= TREEIFY_THRESHOLD - 1)\r\n                    treeifyBin(tab, hash);\r\n            }\r\n            ++modCount;\r\n            ++size;\r\n            afterNodeInsertion(true);\r\n        }\r\n        return v;\r\n    }\r\n\r\n    @Override\r\n    public V merge(K key, V value,\r\n                   BiFunction&lt;? super V, ? super V, ? extends V> remappingFunction) {\r\n        if (value == null)\r\n            throw new NullPointerException();\r\n        if (remappingFunction == null)\r\n            throw new NullPointerException();\r\n        int hash = hash(key);\r\n        Node&lt;K,V>&#91;] tab; Node&lt;K,V> first; int n, i;\r\n        int binCount = 0;\r\n        TreeNode&lt;K,V> t = null;\r\n        Node&lt;K,V> old = null;\r\n        if (size > threshold || (tab = table) == null ||\r\n            (n = tab.length) == 0)\r\n            n = (tab = resize()).length;\r\n        if ((first = tab&#91;i = (n - 1) &amp; hash]) != null) {\r\n            if (first instanceof TreeNode)\r\n                old = (t = (TreeNode&lt;K,V>)first).getTreeNode(hash, key);\r\n            else {\r\n                Node&lt;K,V> e = first; K k;\r\n                do {\r\n                    if (e.hash == hash &amp;&amp;\r\n                        ((k = e.key) == key || (key != null &amp;&amp; key.equals(k)))) {\r\n                        old = e;\r\n                        break;\r\n                    }\r\n                    
++binCount;\r\n                } while ((e = e.next) != null);\r\n            }\r\n        }\r\n        if (old != null) {\r\n            V v;\r\n            if (old.value != null)\r\n                v = remappingFunction.apply(old.value, value);\r\n            else\r\n                v = value;\r\n            if (v != null) {\r\n                old.value = v;\r\n                afterNodeAccess(old);\r\n            }\r\n            else\r\n                removeNode(hash, key, null, false, true);\r\n            return v;\r\n        }\r\n        if (value != null) {\r\n            if (t != null)\r\n                t.putTreeVal(this, tab, hash, key, value);\r\n            else {\r\n                tab&#91;i] = newNode(hash, key, value, first);\r\n                if (binCount >= TREEIFY_THRESHOLD - 1)\r\n                    treeifyBin(tab, hash);\r\n            }\r\n            ++modCount;\r\n            ++size;\r\n            afterNodeInsertion(true);\r\n        }\r\n        return value;\r\n    }\r\n\r\n    @Override\r\n    public void forEach(BiConsumer&lt;? super K, ? super V> action) {\r\n        Node&lt;K,V>&#91;] tab;\r\n        if (action == null)\r\n            throw new NullPointerException();\r\n        if (size > 0 &amp;&amp; (tab = table) != null) {\r\n            int mc = modCount;\r\n            for (int i = 0; i &lt; tab.length; ++i) {\r\n                for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next)\r\n                    action.accept(e.key, e.value);\r\n            }\r\n            if (modCount != mc)\r\n                throw new ConcurrentModificationException();\r\n        }\r\n    }\r\n\r\n    @Override\r\n    public void replaceAll(BiFunction&lt;? super K, ? super V, ? 
extends V> function) {\r\n        Node&lt;K,V>&#91;] tab;\r\n        if (function == null)\r\n            throw new NullPointerException();\r\n        if (size > 0 &amp;&amp; (tab = table) != null) {\r\n            int mc = modCount;\r\n            for (int i = 0; i &lt; tab.length; ++i) {\r\n                for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next) {\r\n                    e.value = function.apply(e.key, e.value);\r\n                }\r\n            }\r\n            if (modCount != mc)\r\n                throw new ConcurrentModificationException();\r\n        }\r\n    }\r\n\r\n    \/* ------------------------------------------------------------ *\/\r\n    \/\/ Cloning and serialization\r\n\r\n    \/**\r\n     * Returns a shallow copy of this &lt;tt>HashMap&lt;\/tt> instance: the keys and\r\n     * values themselves are not cloned.\r\n     *\r\n     * @return a shallow copy of this map\r\n     *\/\r\n    @SuppressWarnings(\"unchecked\")\r\n    @Override\r\n    public Object clone() {\r\n        HashMap&lt;K,V> result;\r\n        try {\r\n            result = (HashMap&lt;K,V>)super.clone();\r\n        } catch (CloneNotSupportedException e) {\r\n            \/\/ this shouldn't happen, since we are Cloneable\r\n            throw new InternalError(e);\r\n        }\r\n        result.reinitialize();\r\n        result.putMapEntries(this, false);\r\n        return result;\r\n    }\r\n\r\n    \/\/ These methods are also used when serializing HashSets\r\n    final float loadFactor() { return loadFactor; }\r\n    final int capacity() {\r\n        return (table != null) ? table.length :\r\n            (threshold > 0) ? 
threshold :\r\n            DEFAULT_INITIAL_CAPACITY;\r\n    }\r\n\r\n    \/**\r\n     * Save the state of the &lt;tt>HashMap&lt;\/tt> instance to a stream (i.e.,\r\n     * serialize it).\r\n     *\r\n     * @serialData The &lt;i>capacity&lt;\/i> of the HashMap (the length of the\r\n     *             bucket array) is emitted (int), followed by the\r\n     *             &lt;i>size&lt;\/i> (an int, the number of key-value\r\n     *             mappings), followed by the key (Object) and value (Object)\r\n     *             for each key-value mapping.  The key-value mappings are\r\n     *             emitted in no particular order.\r\n     *\/\r\n    private void writeObject(java.io.ObjectOutputStream s)\r\n        throws IOException {\r\n        int buckets = capacity();\r\n        \/\/ Write out the threshold, loadfactor, and any hidden stuff\r\n        s.defaultWriteObject();\r\n        s.writeInt(buckets);\r\n        s.writeInt(size);\r\n        internalWriteEntries(s);\r\n    }\r\n\r\n    \/**\r\n     * Reconstitutes this map from a stream (that is, deserializes it).\r\n     * @param s the stream\r\n     * @throws ClassNotFoundException if the class of a serialized object\r\n     *         could not be found\r\n     * @throws IOException if an I\/O error occurs\r\n     *\/\r\n    private void readObject(ObjectInputStream s)\r\n        throws IOException, ClassNotFoundException {\r\n\r\n        ObjectInputStream.GetField fields = s.readFields();\r\n\r\n        \/\/ Read loadFactor (ignore threshold)\r\n        float lf = fields.get(\"loadFactor\", 0.75f);\r\n        if (lf &lt;= 0 || Float.isNaN(lf))\r\n            throw new InvalidObjectException(\"Illegal load factor: \" + lf);\r\n\r\n        lf = Math.min(Math.max(0.25f, lf), 4.0f);\r\n        UnsafeHolder.putLoadFactor(this, lf);\r\n\r\n        reinitialize();\r\n\r\n        s.readInt();                \/\/ Read and ignore number of buckets\r\n        int mappings = s.readInt(); \/\/ Read number of mappings 
(size)\r\n        if (mappings &lt; 0) {\r\n            throw new InvalidObjectException(\"Illegal mappings count: \" + mappings);\r\n        } else if (mappings == 0) {\r\n            \/\/ use defaults\r\n        } else if (mappings > 0) {\r\n            float fc = (float)mappings \/ lf + 1.0f;\r\n            int cap = ((fc &lt; DEFAULT_INITIAL_CAPACITY) ?\r\n                       DEFAULT_INITIAL_CAPACITY :\r\n                       (fc >= MAXIMUM_CAPACITY) ?\r\n                       MAXIMUM_CAPACITY :\r\n                       tableSizeFor((int)fc));\r\n            float ft = (float)cap * lf;\r\n            threshold = ((cap &lt; MAXIMUM_CAPACITY &amp;&amp; ft &lt; MAXIMUM_CAPACITY) ?\r\n                         (int)ft : Integer.MAX_VALUE);\r\n\r\n            \/\/ Check Map.Entry&#91;].class since it's the nearest public type to\r\n            \/\/ what we're actually creating.\r\n            SharedSecrets.getJavaOISAccess().checkArray(s, Entry&#91;].class, cap);\r\n            @SuppressWarnings({\"rawtypes\",\"unchecked\"})\r\n            Node&lt;K,V>&#91;] tab = (Node&lt;K,V>&#91;])new Node&#91;cap];\r\n            table = tab;\r\n\r\n            \/\/ Read the keys and values, and put the mappings in the HashMap\r\n            for (int i = 0; i &lt; mappings; i++) {\r\n                @SuppressWarnings(\"unchecked\")\r\n                    K key = (K) s.readObject();\r\n                @SuppressWarnings(\"unchecked\")\r\n                    V value = (V) s.readObject();\r\n                putVal(hash(key), key, value, false, false);\r\n            }\r\n        }\r\n    }\r\n\r\n    \/\/ Support for resetting final field during deserializing\r\n    private static final class UnsafeHolder {\r\n        private UnsafeHolder() { throw new InternalError(); }\r\n        private static final sun.misc.Unsafe unsafe\r\n                = sun.misc.Unsafe.getUnsafe();\r\n        private static final long LF_OFFSET;\r\n        static {\r\n            try {\r\n             
   LF_OFFSET = unsafe.objectFieldOffset(HashMap.class.getDeclaredField(\"loadFactor\"));\r\n            } catch (NoSuchFieldException e) {\r\n                throw new InternalError();\r\n            }\r\n        }\r\n        static void putLoadFactor(HashMap&lt;?, ?> map, float lf) {\r\n            unsafe.putFloat(map, LF_OFFSET, lf);\r\n        }\r\n    }\r\n\r\n    \/* ------------------------------------------------------------ *\/\r\n    \/\/ iterators\r\n\r\n    abstract class HashIterator {\r\n        Node&lt;K,V> next;        \/\/ next entry to return\r\n        Node&lt;K,V> current;     \/\/ current entry\r\n        int expectedModCount;  \/\/ for fast-fail\r\n        int index;             \/\/ current slot\r\n\r\n        HashIterator() {\r\n            expectedModCount = modCount;\r\n            Node&lt;K,V>&#91;] t = table;\r\n            current = next = null;\r\n            index = 0;\r\n            if (t != null &amp;&amp; size > 0) { \/\/ advance to first entry\r\n                do {} while (index &lt; t.length &amp;&amp; (next = t&#91;index++]) == null);\r\n            }\r\n        }\r\n\r\n        public final boolean hasNext() {\r\n            return next != null;\r\n        }\r\n\r\n        final Node&lt;K,V> nextNode() {\r\n            Node&lt;K,V>&#91;] t;\r\n            Node&lt;K,V> e = next;\r\n            if (modCount != expectedModCount)\r\n                throw new ConcurrentModificationException();\r\n            if (e == null)\r\n                throw new NoSuchElementException();\r\n            if ((next = (current = e).next) == null &amp;&amp; (t = table) != null) {\r\n                do {} while (index &lt; t.length &amp;&amp; (next = t&#91;index++]) == null);\r\n            }\r\n            return e;\r\n        }\r\n\r\n        public final void remove() {\r\n            Node&lt;K,V> p = current;\r\n            if (p == null)\r\n                throw new IllegalStateException();\r\n            if (modCount != expectedModCount)\r\n  
              throw new ConcurrentModificationException();\r\n            current = null;\r\n            K key = p.key;\r\n            removeNode(hash(key), key, null, false, false);\r\n            expectedModCount = modCount;\r\n        }\r\n    }\r\n\r\n    final class KeyIterator extends HashIterator\r\n        implements Iterator&lt;K> {\r\n        public final K next() { return nextNode().key; }\r\n    }\r\n\r\n    final class ValueIterator extends HashIterator\r\n        implements Iterator&lt;V> {\r\n        public final V next() { return nextNode().value; }\r\n    }\r\n\r\n    final class EntryIterator extends HashIterator\r\n        implements Iterator&lt;Entry&lt;K,V>> {\r\n        public final Entry&lt;K,V> next() { return nextNode(); }\r\n    }\r\n\r\n    \/* ------------------------------------------------------------ *\/\r\n    \/\/ spliterators\r\n\r\n    static class HashMapSpliterator&lt;K,V> {\r\n        final HashMap&lt;K,V> map;\r\n        Node&lt;K,V> current;          \/\/ current node\r\n        int index;                  \/\/ current index, modified on advance\/split\r\n        int fence;                  \/\/ one past last index\r\n        int est;                    \/\/ size estimate\r\n        int expectedModCount;       \/\/ for comodification checks\r\n\r\n        HashMapSpliterator(HashMap&lt;K,V> m, int origin,\r\n                           int fence, int est,\r\n                           int expectedModCount) {\r\n            this.map = m;\r\n            this.index = origin;\r\n            this.fence = fence;\r\n            this.est = est;\r\n            this.expectedModCount = expectedModCount;\r\n        }\r\n\r\n        final int getFence() { \/\/ initialize fence and size on first use\r\n            int hi;\r\n            if ((hi = fence) &lt; 0) {\r\n                HashMap&lt;K,V> m = map;\r\n                est = m.size;\r\n                expectedModCount = m.modCount;\r\n                Node&lt;K,V>&#91;] tab = 
m.table;\r\n                hi = fence = (tab == null) ? 0 : tab.length;\r\n            }\r\n            return hi;\r\n        }\r\n\r\n        public final long estimateSize() {\r\n            getFence(); \/\/ force init\r\n            return (long) est;\r\n        }\r\n    }\r\n\r\n    static final class KeySpliterator&lt;K,V>\r\n        extends HashMapSpliterator&lt;K,V>\r\n        implements Spliterator&lt;K> {\r\n        KeySpliterator(HashMap&lt;K,V> m, int origin, int fence, int est,\r\n                       int expectedModCount) {\r\n            super(m, origin, fence, est, expectedModCount);\r\n        }\r\n\r\n        public KeySpliterator&lt;K,V> trySplit() {\r\n            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;\r\n            return (lo >= mid || current != null) ? null :\r\n                new KeySpliterator&lt;>(map, lo, index = mid, est >>>= 1,\r\n                                        expectedModCount);\r\n        }\r\n\r\n        public void forEachRemaining(Consumer&lt;? super K> action) {\r\n            int i, hi, mc;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            HashMap&lt;K,V> m = map;\r\n            Node&lt;K,V>&#91;] tab = m.table;\r\n            if ((hi = fence) &lt; 0) {\r\n                mc = expectedModCount = m.modCount;\r\n                hi = fence = (tab == null) ? 
0 : tab.length;\r\n            }\r\n            else\r\n                mc = expectedModCount;\r\n            if (tab != null &amp;&amp; tab.length >= hi &amp;&amp;\r\n                (i = index) >= 0 &amp;&amp; (i &lt; (index = hi) || current != null)) {\r\n                Node&lt;K,V> p = current;\r\n                current = null;\r\n                do {\r\n                    if (p == null)\r\n                        p = tab&#91;i++];\r\n                    else {\r\n                        action.accept(p.key);\r\n                        p = p.next;\r\n                    }\r\n                } while (p != null || i &lt; hi);\r\n                if (m.modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n\r\n        public boolean tryAdvance(Consumer&lt;? super K> action) {\r\n            int hi;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            Node&lt;K,V>&#91;] tab = map.table;\r\n            if (tab != null &amp;&amp; tab.length >= (hi = getFence()) &amp;&amp; index >= 0) {\r\n                while (current != null || index &lt; hi) {\r\n                    if (current == null)\r\n                        current = tab&#91;index++];\r\n                    else {\r\n                        K k = current.key;\r\n                        current = current.next;\r\n                        action.accept(k);\r\n                        if (map.modCount != expectedModCount)\r\n                            throw new ConcurrentModificationException();\r\n                        return true;\r\n                    }\r\n                }\r\n            }\r\n            return false;\r\n        }\r\n\r\n        public int characteristics() {\r\n            return (fence &lt; 0 || est == map.size ? 
Spliterator.SIZED : 0) |\r\n                Spliterator.DISTINCT;\r\n        }\r\n    }\r\n\r\n    static final class ValueSpliterator&lt;K,V>\r\n        extends HashMapSpliterator&lt;K,V>\r\n        implements Spliterator&lt;V> {\r\n        ValueSpliterator(HashMap&lt;K,V> m, int origin, int fence, int est,\r\n                         int expectedModCount) {\r\n            super(m, origin, fence, est, expectedModCount);\r\n        }\r\n\r\n        public ValueSpliterator&lt;K,V> trySplit() {\r\n            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;\r\n            return (lo >= mid || current != null) ? null :\r\n                new ValueSpliterator&lt;>(map, lo, index = mid, est >>>= 1,\r\n                                          expectedModCount);\r\n        }\r\n\r\n        public void forEachRemaining(Consumer&lt;? super V> action) {\r\n            int i, hi, mc;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            HashMap&lt;K,V> m = map;\r\n            Node&lt;K,V>&#91;] tab = m.table;\r\n            if ((hi = fence) &lt; 0) {\r\n                mc = expectedModCount = m.modCount;\r\n                hi = fence = (tab == null) ? 
0 : tab.length;\r\n            }\r\n            else\r\n                mc = expectedModCount;\r\n            if (tab != null &amp;&amp; tab.length >= hi &amp;&amp;\r\n                (i = index) >= 0 &amp;&amp; (i &lt; (index = hi) || current != null)) {\r\n                Node&lt;K,V> p = current;\r\n                current = null;\r\n                do {\r\n                    if (p == null)\r\n                        p = tab&#91;i++];\r\n                    else {\r\n                        action.accept(p.value);\r\n                        p = p.next;\r\n                    }\r\n                } while (p != null || i &lt; hi);\r\n                if (m.modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n\r\n        public boolean tryAdvance(Consumer&lt;? super V> action) {\r\n            int hi;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            Node&lt;K,V>&#91;] tab = map.table;\r\n            if (tab != null &amp;&amp; tab.length >= (hi = getFence()) &amp;&amp; index >= 0) {\r\n                while (current != null || index &lt; hi) {\r\n                    if (current == null)\r\n                        current = tab&#91;index++];\r\n                    else {\r\n                        V v = current.value;\r\n                        current = current.next;\r\n                        action.accept(v);\r\n                        if (map.modCount != expectedModCount)\r\n                            throw new ConcurrentModificationException();\r\n                        return true;\r\n                    }\r\n                }\r\n            }\r\n            return false;\r\n        }\r\n\r\n        public int characteristics() {\r\n            return (fence &lt; 0 || est == map.size ? 
Spliterator.SIZED : 0);\r\n        }\r\n    }\r\n\r\n    static final class EntrySpliterator&lt;K,V>\r\n        extends HashMapSpliterator&lt;K,V>\r\n        implements Spliterator&lt;Entry&lt;K,V>> {\r\n        EntrySpliterator(HashMap&lt;K,V> m, int origin, int fence, int est,\r\n                         int expectedModCount) {\r\n            super(m, origin, fence, est, expectedModCount);\r\n        }\r\n\r\n        public EntrySpliterator&lt;K,V> trySplit() {\r\n            int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;\r\n            return (lo >= mid || current != null) ? null :\r\n                new EntrySpliterator&lt;>(map, lo, index = mid, est >>>= 1,\r\n                                          expectedModCount);\r\n        }\r\n\r\n        public void forEachRemaining(Consumer&lt;? super Entry&lt;K,V>> action) {\r\n            int i, hi, mc;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            HashMap&lt;K,V> m = map;\r\n            Node&lt;K,V>&#91;] tab = m.table;\r\n            if ((hi = fence) &lt; 0) {\r\n                mc = expectedModCount = m.modCount;\r\n                hi = fence = (tab == null) ? 
0 : tab.length;\r\n            }\r\n            else\r\n                mc = expectedModCount;\r\n            if (tab != null &amp;&amp; tab.length >= hi &amp;&amp;\r\n                (i = index) >= 0 &amp;&amp; (i &lt; (index = hi) || current != null)) {\r\n                Node&lt;K,V> p = current;\r\n                current = null;\r\n                do {\r\n                    if (p == null)\r\n                        p = tab&#91;i++];\r\n                    else {\r\n                        action.accept(p);\r\n                        p = p.next;\r\n                    }\r\n                } while (p != null || i &lt; hi);\r\n                if (m.modCount != mc)\r\n                    throw new ConcurrentModificationException();\r\n            }\r\n        }\r\n\r\n        public boolean tryAdvance(Consumer&lt;? super Entry&lt;K,V>> action) {\r\n            int hi;\r\n            if (action == null)\r\n                throw new NullPointerException();\r\n            Node&lt;K,V>&#91;] tab = map.table;\r\n            if (tab != null &amp;&amp; tab.length >= (hi = getFence()) &amp;&amp; index >= 0) {\r\n                while (current != null || index &lt; hi) {\r\n                    if (current == null)\r\n                        current = tab&#91;index++];\r\n                    else {\r\n                        Node&lt;K,V> e = current;\r\n                        current = current.next;\r\n                        action.accept(e);\r\n                        if (map.modCount != expectedModCount)\r\n                            throw new ConcurrentModificationException();\r\n                        return true;\r\n                    }\r\n                }\r\n            }\r\n            return false;\r\n        }\r\n\r\n        public int characteristics() {\r\n            return (fence &lt; 0 || est == map.size ? 
Spliterator.SIZED : 0) |\r\n                Spliterator.DISTINCT;\r\n        }\r\n    }\r\n\r\n    \/* ------------------------------------------------------------ *\/\r\n    \/\/ LinkedHashMap support\r\n\r\n\r\n    \/*\r\n     * The following package-protected methods are designed to be\r\n     * overridden by LinkedHashMap, but not by any other subclass.\r\n     * Nearly all other internal methods are also package-protected\r\n     * but are declared final, so can be used by LinkedHashMap, view\r\n     * classes, and HashSet.\r\n     *\/\r\n\r\n    \/\/ Create a regular (non-tree) node\r\n    Node&lt;K,V> newNode(int hash, K key, V value, Node&lt;K,V> next) {\r\n        return new Node&lt;>(hash, key, value, next);\r\n    }\r\n\r\n    \/\/ For conversion from TreeNodes to plain nodes\r\n    Node&lt;K,V> replacementNode(Node&lt;K,V> p, Node&lt;K,V> next) {\r\n        return new Node&lt;>(p.hash, p.key, p.value, next);\r\n    }\r\n\r\n    \/\/ Create a tree bin node\r\n    TreeNode&lt;K,V> newTreeNode(int hash, K key, V value, Node&lt;K,V> next) {\r\n        return new TreeNode&lt;>(hash, key, value, next);\r\n    }\r\n\r\n    \/\/ For treeifyBin\r\n    TreeNode&lt;K,V> replacementTreeNode(Node&lt;K,V> p, Node&lt;K,V> next) {\r\n        return new TreeNode&lt;>(p.hash, p.key, p.value, next);\r\n    }\r\n\r\n    \/**\r\n     * Reset to initial default state.  
Called by clone and readObject.\r\n     *\/\r\n    void reinitialize() {\r\n        table = null;\r\n        entrySet = null;\r\n        keySet = null;\r\n        values = null;\r\n        modCount = 0;\r\n        threshold = 0;\r\n        size = 0;\r\n    }\r\n\r\n    \/\/ Callbacks to allow LinkedHashMap post-actions\r\n    void afterNodeAccess(Node&lt;K,V> p) { }\r\n    void afterNodeInsertion(boolean evict) { }\r\n    void afterNodeRemoval(Node&lt;K,V> p) { }\r\n\r\n    \/\/ Called only from writeObject, to ensure compatible ordering.\r\n    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {\r\n        Node&lt;K,V>&#91;] tab;\r\n        if (size > 0 &amp;&amp; (tab = table) != null) {\r\n            for (int i = 0; i &lt; tab.length; ++i) {\r\n                for (Node&lt;K,V> e = tab&#91;i]; e != null; e = e.next) {\r\n                    s.writeObject(e.key);\r\n                    s.writeObject(e.value);\r\n                }\r\n            }\r\n        }\r\n    }\r\n\r\n    \/* ------------------------------------------------------------ *\/\r\n    \/\/ Tree bins\r\n\r\n    \/**\r\n     * Entry for Tree bins. 
Extends LinkedHashMap.Entry (which in turn\r\n     * extends Node) so can be used as extension of either regular or\r\n     * linked node.\r\n     *\/\r\n    static final class TreeNode&lt;K,V> extends LinkedHashMap.Entry&lt;K,V> {\r\n        TreeNode&lt;K,V> parent;  \/\/ red-black tree links\r\n        TreeNode&lt;K,V> left;\r\n        TreeNode&lt;K,V> right;\r\n        TreeNode&lt;K,V> prev;    \/\/ needed to unlink next upon deletion\r\n        boolean red;\r\n        TreeNode(int hash, K key, V val, Node&lt;K,V> next) {\r\n            super(hash, key, val, next);\r\n        }\r\n\r\n        \/**\r\n         * Returns root of tree containing this node.\r\n         *\/\r\n        final TreeNode&lt;K,V> root() {\r\n            for (TreeNode&lt;K,V> r = this, p;;) {\r\n                if ((p = r.parent) == null)\r\n                    return r;\r\n                r = p;\r\n            }\r\n        }\r\n\r\n        \/**\r\n         * Ensures that the given root is the first node of its bin.\r\n         *\/\r\n        static &lt;K,V> void moveRootToFront(Node&lt;K,V>&#91;] tab, TreeNode&lt;K,V> root) {\r\n            int n;\r\n            if (root != null &amp;&amp; tab != null &amp;&amp; (n = tab.length) > 0) {\r\n                int index = (n - 1) &amp; root.hash;\r\n                TreeNode&lt;K,V> first = (TreeNode&lt;K,V>)tab&#91;index];\r\n                if (root != first) {\r\n                    Node&lt;K,V> rn;\r\n                    tab&#91;index] = root;\r\n                    TreeNode&lt;K,V> rp = root.prev;\r\n                    if ((rn = root.next) != null)\r\n                        ((TreeNode&lt;K,V>)rn).prev = rp;\r\n                    if (rp != null)\r\n                        rp.next = rn;\r\n                    if (first != null)\r\n                        first.prev = root;\r\n                    root.next = first;\r\n                    root.prev = null;\r\n                }\r\n                assert checkInvariants(root);\r\n            
}\r\n        }\r\n\r\n        \/**\r\n         * Finds the node starting at root p with the given hash and key.\r\n         * The kc argument caches comparableClassFor(key) upon first use\r\n         * comparing keys.\r\n         *\/\r\n        final TreeNode&lt;K,V> find(int h, Object k, Class&lt;?> kc) {\r\n            TreeNode&lt;K,V> p = this;\r\n            do {\r\n                int ph, dir; K pk;\r\n                TreeNode&lt;K,V> pl = p.left, pr = p.right, q;\r\n                if ((ph = p.hash) > h)\r\n                    p = pl;\r\n                else if (ph &lt; h)\r\n                    p = pr;\r\n                else if ((pk = p.key) == k || (k != null &amp;&amp; k.equals(pk)))\r\n                    return p;\r\n                else if (pl == null)\r\n                    p = pr;\r\n                else if (pr == null)\r\n                    p = pl;\r\n                else if ((kc != null ||\r\n                          (kc = comparableClassFor(k)) != null) &amp;&amp;\r\n                         (dir = compareComparables(kc, k, pk)) != 0)\r\n                    p = (dir &lt; 0) ? pl : pr;\r\n                else if ((q = pr.find(h, k, kc)) != null)\r\n                    return q;\r\n                else\r\n                    p = pl;\r\n            } while (p != null);\r\n            return null;\r\n        }\r\n\r\n        \/**\r\n         * Calls find for root node.\r\n         *\/\r\n        final TreeNode&lt;K,V> getTreeNode(int h, Object k) {\r\n            return ((parent != null) ? root() : this).find(h, k, null);\r\n        }\r\n\r\n        \/**\r\n         * Tie-breaking utility for ordering insertions when equal\r\n         * hashCodes and non-comparable. We don't require a total\r\n         * order, just a consistent insertion rule to maintain\r\n         * equivalence across rebalancings. 
Tie-breaking further than\r\n         * necessary simplifies testing a bit.\r\n         *\/\r\n        static int tieBreakOrder(Object a, Object b) {\r\n            int d;\r\n            if (a == null || b == null ||\r\n                (d = a.getClass().getName().\r\n                 compareTo(b.getClass().getName())) == 0)\r\n                d = (System.identityHashCode(a) &lt;= System.identityHashCode(b) ?\r\n                     -1 : 1);\r\n            return d;\r\n        }\r\n\r\n        \/**\r\n         * Forms tree of the nodes linked from this node.\r\n         *\/\r\n        final void treeify(Node&lt;K,V>&#91;] tab) {\r\n            TreeNode&lt;K,V> root = null;\r\n            for (TreeNode&lt;K,V> x = this, next; x != null; x = next) {\r\n                next = (TreeNode&lt;K,V>)x.next;\r\n                x.left = x.right = null;\r\n                if (root == null) {\r\n                    x.parent = null;\r\n                    x.red = false;\r\n                    root = x;\r\n                }\r\n                else {\r\n                    K k = x.key;\r\n                    int h = x.hash;\r\n                    Class&lt;?> kc = null;\r\n                    for (TreeNode&lt;K,V> p = root;;) {\r\n                        int dir, ph;\r\n                        K pk = p.key;\r\n                        if ((ph = p.hash) > h)\r\n                            dir = -1;\r\n                        else if (ph &lt; h)\r\n                            dir = 1;\r\n                        else if ((kc == null &amp;&amp;\r\n                                  (kc = comparableClassFor(k)) == null) ||\r\n                                 (dir = compareComparables(kc, k, pk)) == 0)\r\n                            dir = tieBreakOrder(k, pk);\r\n\r\n                        TreeNode&lt;K,V> xp = p;\r\n                        if ((p = (dir &lt;= 0) ? 
p.left : p.right) == null) {\r\n                            x.parent = xp;\r\n                            if (dir &lt;= 0)\r\n                                xp.left = x;\r\n                            else\r\n                                xp.right = x;\r\n                            root = balanceInsertion(root, x);\r\n                            break;\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n            moveRootToFront(tab, root);\r\n        }\r\n\r\n        \/**\r\n         * Returns a list of non-TreeNodes replacing those linked from\r\n         * this node.\r\n         *\/\r\n        final Node&lt;K,V> untreeify(HashMap&lt;K,V> map) {\r\n            Node&lt;K,V> hd = null, tl = null;\r\n            for (Node&lt;K,V> q = this; q != null; q = q.next) {\r\n                Node&lt;K,V> p = map.replacementNode(q, null);\r\n                if (tl == null)\r\n                    hd = p;\r\n                else\r\n                    tl.next = p;\r\n                tl = p;\r\n            }\r\n            return hd;\r\n        }\r\n\r\n        \/**\r\n         * Tree version of putVal.\r\n         *\/\r\n        final TreeNode&lt;K,V> putTreeVal(HashMap&lt;K,V> map, Node&lt;K,V>&#91;] tab,\r\n                                       int h, K k, V v) {\r\n            Class&lt;?> kc = null;\r\n            boolean searched = false;\r\n            TreeNode&lt;K,V> root = (parent != null) ? 
root() : this;\r\n            for (TreeNode&lt;K,V> p = root;;) {\r\n                int dir, ph; K pk;\r\n                if ((ph = p.hash) > h)\r\n                    dir = -1;\r\n                else if (ph &lt; h)\r\n                    dir = 1;\r\n                else if ((pk = p.key) == k || (k != null &amp;&amp; k.equals(pk)))\r\n                    return p;\r\n                else if ((kc == null &amp;&amp;\r\n                          (kc = comparableClassFor(k)) == null) ||\r\n                         (dir = compareComparables(kc, k, pk)) == 0) {\r\n                    if (!searched) {\r\n                        TreeNode&lt;K,V> q, ch;\r\n                        searched = true;\r\n                        if (((ch = p.left) != null &amp;&amp;\r\n                             (q = ch.find(h, k, kc)) != null) ||\r\n                            ((ch = p.right) != null &amp;&amp;\r\n                             (q = ch.find(h, k, kc)) != null))\r\n                            return q;\r\n                    }\r\n                    dir = tieBreakOrder(k, pk);\r\n                }\r\n\r\n                TreeNode&lt;K,V> xp = p;\r\n                if ((p = (dir &lt;= 0) ? 
p.left : p.right) == null) {\r\n                    Node&lt;K,V> xpn = xp.next;\r\n                    TreeNode&lt;K,V> x = map.newTreeNode(h, k, v, xpn);\r\n                    if (dir &lt;= 0)\r\n                        xp.left = x;\r\n                    else\r\n                        xp.right = x;\r\n                    xp.next = x;\r\n                    x.parent = x.prev = xp;\r\n                    if (xpn != null)\r\n                        ((TreeNode&lt;K,V>)xpn).prev = x;\r\n                    moveRootToFront(tab, balanceInsertion(root, x));\r\n                    return null;\r\n                }\r\n            }\r\n        }\r\n\r\n        \/**\r\n         * Removes the given node, that must be present before this call.\r\n         * This is messier than typical red-black deletion code because we\r\n         * cannot swap the contents of an interior node with a leaf\r\n         * successor that is pinned by \"next\" pointers that are accessible\r\n         * independently during traversal. So instead we swap the tree\r\n         * linkages. If the current tree appears to have too few nodes,\r\n         * the bin is converted back to a plain bin. 
(The test triggers\r\n         * somewhere between 2 and 6 nodes, depending on tree structure).\r\n         *\/\r\n        final void removeTreeNode(HashMap&lt;K,V> map, Node&lt;K,V>&#91;] tab,\r\n                                  boolean movable) {\r\n            int n;\r\n            if (tab == null || (n = tab.length) == 0)\r\n                return;\r\n            int index = (n - 1) &amp; hash;\r\n            TreeNode&lt;K,V> first = (TreeNode&lt;K,V>)tab&#91;index], root = first, rl;\r\n            TreeNode&lt;K,V> succ = (TreeNode&lt;K,V>)next, pred = prev;\r\n            if (pred == null)\r\n                tab&#91;index] = first = succ;\r\n            else\r\n                pred.next = succ;\r\n            if (succ != null)\r\n                succ.prev = pred;\r\n            if (first == null)\r\n                return;\r\n            if (root.parent != null)\r\n                root = root.root();\r\n            if (root == null\r\n                || (movable\r\n                    &amp;&amp; (root.right == null\r\n                        || (rl = root.left) == null\r\n                        || rl.left == null))) {\r\n                tab&#91;index] = first.untreeify(map);  \/\/ too small\r\n                return;\r\n            }\r\n            TreeNode&lt;K,V> p = this, pl = left, pr = right, replacement;\r\n            if (pl != null &amp;&amp; pr != null) {\r\n                TreeNode&lt;K,V> s = pr, sl;\r\n                while ((sl = s.left) != null) \/\/ find successor\r\n                    s = sl;\r\n                boolean c = s.red; s.red = p.red; p.red = c; \/\/ swap colors\r\n                TreeNode&lt;K,V> sr = s.right;\r\n                TreeNode&lt;K,V> pp = p.parent;\r\n                if (s == pr) { \/\/ p was s's direct parent\r\n                    p.parent = s;\r\n                    s.right = p;\r\n                }\r\n                else {\r\n                    TreeNode&lt;K,V> sp = s.parent;\r\n                    if ((p.parent 
= sp) != null) {\r\n                        if (s == sp.left)\r\n                            sp.left = p;\r\n                        else\r\n                            sp.right = p;\r\n                    }\r\n                    if ((s.right = pr) != null)\r\n                        pr.parent = s;\r\n                }\r\n                p.left = null;\r\n                if ((p.right = sr) != null)\r\n                    sr.parent = p;\r\n                if ((s.left = pl) != null)\r\n                    pl.parent = s;\r\n                if ((s.parent = pp) == null)\r\n                    root = s;\r\n                else if (p == pp.left)\r\n                    pp.left = s;\r\n                else\r\n                    pp.right = s;\r\n                if (sr != null)\r\n                    replacement = sr;\r\n                else\r\n                    replacement = p;\r\n            }\r\n            else if (pl != null)\r\n                replacement = pl;\r\n            else if (pr != null)\r\n                replacement = pr;\r\n            else\r\n                replacement = p;\r\n            if (replacement != p) {\r\n                TreeNode&lt;K,V> pp = replacement.parent = p.parent;\r\n                if (pp == null)\r\n                    root = replacement;\r\n                else if (p == pp.left)\r\n                    pp.left = replacement;\r\n                else\r\n                    pp.right = replacement;\r\n                p.left = p.right = p.parent = null;\r\n            }\r\n\r\n            TreeNode&lt;K,V> r = p.red ? 
root : balanceDeletion(root, replacement);\r\n\r\n            if (replacement == p) {  \/\/ detach\r\n                TreeNode&lt;K,V> pp = p.parent;\r\n                p.parent = null;\r\n                if (pp != null) {\r\n                    if (p == pp.left)\r\n                        pp.left = null;\r\n                    else if (p == pp.right)\r\n                        pp.right = null;\r\n                }\r\n            }\r\n            if (movable)\r\n                moveRootToFront(tab, r);\r\n        }\r\n\r\n        \/**\r\n         * Splits nodes in a tree bin into lower and upper tree bins,\r\n         * or untreeifies if now too small. Called only from resize;\r\n         * see above discussion about split bits and indices.\r\n         *\r\n         * @param map the map\r\n         * @param tab the table for recording bin heads\r\n         * @param index the index of the table being split\r\n         * @param bit the bit of hash to split on\r\n         *\/\r\n        final void split(HashMap&lt;K,V> map, Node&lt;K,V>&#91;] tab, int index, int bit) {\r\n            TreeNode&lt;K,V> b = this;\r\n            \/\/ Relink into lo and hi lists, preserving order\r\n            TreeNode&lt;K,V> loHead = null, loTail = null;\r\n            TreeNode&lt;K,V> hiHead = null, hiTail = null;\r\n            int lc = 0, hc = 0;\r\n            for (TreeNode&lt;K,V> e = b, next; e != null; e = next) {\r\n                next = (TreeNode&lt;K,V>)e.next;\r\n                e.next = null;\r\n                if ((e.hash &amp; bit) == 0) {\r\n                    if ((e.prev = loTail) == null)\r\n                        loHead = e;\r\n                    else\r\n                        loTail.next = e;\r\n                    loTail = e;\r\n                    ++lc;\r\n                }\r\n                else {\r\n                    if ((e.prev = hiTail) == null)\r\n                        hiHead = e;\r\n                    else\r\n                        hiTail.next = 
e;\r\n                    hiTail = e;\r\n                    ++hc;\r\n                }\r\n            }\r\n\r\n            if (loHead != null) {\r\n                if (lc &lt;= UNTREEIFY_THRESHOLD)\r\n                    tab&#91;index] = loHead.untreeify(map);\r\n                else {\r\n                    tab&#91;index] = loHead;\r\n                    if (hiHead != null) \/\/ (else is already treeified)\r\n                        loHead.treeify(tab);\r\n                }\r\n            }\r\n            if (hiHead != null) {\r\n                if (hc &lt;= UNTREEIFY_THRESHOLD)\r\n                    tab&#91;index + bit] = hiHead.untreeify(map);\r\n                else {\r\n                    tab&#91;index + bit] = hiHead;\r\n                    if (loHead != null)\r\n                        hiHead.treeify(tab);\r\n                }\r\n            }\r\n        }\r\n\r\n        \/* ------------------------------------------------------------ *\/\r\n        \/\/ Red-black tree methods, all adapted from CLR\r\n\r\n        static &lt;K,V> TreeNode&lt;K,V> rotateLeft(TreeNode&lt;K,V> root,\r\n                                              TreeNode&lt;K,V> p) {\r\n            TreeNode&lt;K,V> r, pp, rl;\r\n            if (p != null &amp;&amp; (r = p.right) != null) {\r\n                if ((rl = p.right = r.left) != null)\r\n                    rl.parent = p;\r\n                if ((pp = r.parent = p.parent) == null)\r\n                    (root = r).red = false;\r\n                else if (pp.left == p)\r\n                    pp.left = r;\r\n                else\r\n                    pp.right = r;\r\n                r.left = p;\r\n                p.parent = r;\r\n            }\r\n            return root;\r\n        }\r\n\r\n        static &lt;K,V> TreeNode&lt;K,V> rotateRight(TreeNode&lt;K,V> root,\r\n                                               TreeNode&lt;K,V> p) {\r\n            TreeNode&lt;K,V> l, pp, lr;\r\n            if (p != null &amp;&amp; (l = p.left) != 
null) {\r\n                if ((lr = p.left = l.right) != null)\r\n                    lr.parent = p;\r\n                if ((pp = l.parent = p.parent) == null)\r\n                    (root = l).red = false;\r\n                else if (pp.right == p)\r\n                    pp.right = l;\r\n                else\r\n                    pp.left = l;\r\n                l.right = p;\r\n                p.parent = l;\r\n            }\r\n            return root;\r\n        }\r\n\r\n        static &lt;K,V> TreeNode&lt;K,V> balanceInsertion(TreeNode&lt;K,V> root,\r\n                                                    TreeNode&lt;K,V> x) {\r\n            x.red = true;\r\n            for (TreeNode&lt;K,V> xp, xpp, xppl, xppr;;) {\r\n                if ((xp = x.parent) == null) {\r\n                    x.red = false;\r\n                    return x;\r\n                }\r\n                else if (!xp.red || (xpp = xp.parent) == null)\r\n                    return root;\r\n                if (xp == (xppl = xpp.left)) {\r\n                    if ((xppr = xpp.right) != null &amp;&amp; xppr.red) {\r\n                        xppr.red = false;\r\n                        xp.red = false;\r\n                        xpp.red = true;\r\n                        x = xpp;\r\n                    }\r\n                    else {\r\n                        if (x == xp.right) {\r\n                            root = rotateLeft(root, x = xp);\r\n                            xpp = (xp = x.parent) == null ? 
null : xp.parent;\r\n                        }\r\n                        if (xp != null) {\r\n                            xp.red = false;\r\n                            if (xpp != null) {\r\n                                xpp.red = true;\r\n                                root = rotateRight(root, xpp);\r\n                            }\r\n                        }\r\n                    }\r\n                }\r\n                else {\r\n                    if (xppl != null &amp;&amp; xppl.red) {\r\n                        xppl.red = false;\r\n                        xp.red = false;\r\n                        xpp.red = true;\r\n                        x = xpp;\r\n                    }\r\n                    else {\r\n                        if (x == xp.left) {\r\n                            root = rotateRight(root, x = xp);\r\n                            xpp = (xp = x.parent) == null ? null : xp.parent;\r\n                        }\r\n                        if (xp != null) {\r\n                            xp.red = false;\r\n                            if (xpp != null) {\r\n                                xpp.red = true;\r\n                                root = rotateLeft(root, xpp);\r\n                            }\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n        }\r\n\r\n        static &lt;K,V> TreeNode&lt;K,V> balanceDeletion(TreeNode&lt;K,V> root,\r\n                                                   TreeNode&lt;K,V> x) {\r\n            for (TreeNode&lt;K,V> xp, xpl, xpr;;) {\r\n                if (x == null || x == root)\r\n                    return root;\r\n                else if ((xp = x.parent) == null) {\r\n                    x.red = false;\r\n                    return x;\r\n                }\r\n                else if (x.red) {\r\n                    x.red = false;\r\n                    return root;\r\n                }\r\n                else if ((xpl = xp.left) == x) {\r\n                    
if ((xpr = xp.right) != null &amp;&amp; xpr.red) {\r\n                        xpr.red = false;\r\n                        xp.red = true;\r\n                        root = rotateLeft(root, xp);\r\n                        xpr = (xp = x.parent) == null ? null : xp.right;\r\n                    }\r\n                    if (xpr == null)\r\n                        x = xp;\r\n                    else {\r\n                        TreeNode&lt;K,V> sl = xpr.left, sr = xpr.right;\r\n                        if ((sr == null || !sr.red) &amp;&amp;\r\n                            (sl == null || !sl.red)) {\r\n                            xpr.red = true;\r\n                            x = xp;\r\n                        }\r\n                        else {\r\n                            if (sr == null || !sr.red) {\r\n                                if (sl != null)\r\n                                    sl.red = false;\r\n                                xpr.red = true;\r\n                                root = rotateRight(root, xpr);\r\n                                xpr = (xp = x.parent) == null ?\r\n                                    null : xp.right;\r\n                            }\r\n                            if (xpr != null) {\r\n                                xpr.red = (xp == null) ? 
false : xp.red;\r\n                                if ((sr = xpr.right) != null)\r\n                                    sr.red = false;\r\n                            }\r\n                            if (xp != null) {\r\n                                xp.red = false;\r\n                                root = rotateLeft(root, xp);\r\n                            }\r\n                            x = root;\r\n                        }\r\n                    }\r\n                }\r\n                else { \/\/ symmetric\r\n                    if (xpl != null &amp;&amp; xpl.red) {\r\n                        xpl.red = false;\r\n                        xp.red = true;\r\n                        root = rotateRight(root, xp);\r\n                        xpl = (xp = x.parent) == null ? null : xp.left;\r\n                    }\r\n                    if (xpl == null)\r\n                        x = xp;\r\n                    else {\r\n                        TreeNode&lt;K,V> sl = xpl.left, sr = xpl.right;\r\n                        if ((sl == null || !sl.red) &amp;&amp;\r\n                            (sr == null || !sr.red)) {\r\n                            xpl.red = true;\r\n                            x = xp;\r\n                        }\r\n                        else {\r\n                            if (sl == null || !sl.red) {\r\n                                if (sr != null)\r\n                                    sr.red = false;\r\n                                xpl.red = true;\r\n                                root = rotateLeft(root, xpl);\r\n                                xpl = (xp = x.parent) == null ?\r\n                                    null : xp.left;\r\n                            }\r\n                            if (xpl != null) {\r\n                                xpl.red = (xp == null) ? 
false : xp.red;\r\n                                if ((sl = xpl.left) != null)\r\n                                    sl.red = false;\r\n                            }\r\n                            if (xp != null) {\r\n                                xp.red = false;\r\n                                root = rotateRight(root, xp);\r\n                            }\r\n                            x = root;\r\n                        }\r\n                    }\r\n                }\r\n            }\r\n        }\r\n\r\n        \/**\r\n         * Recursive invariant check\r\n         *\/\r\n        static &lt;K,V> boolean checkInvariants(TreeNode&lt;K,V> t) {\r\n            TreeNode&lt;K,V> tp = t.parent, tl = t.left, tr = t.right,\r\n                tb = t.prev, tn = (TreeNode&lt;K,V>)t.next;\r\n            if (tb != null &amp;&amp; tb.next != t)\r\n                return false;\r\n            if (tn != null &amp;&amp; tn.prev != t)\r\n                return false;\r\n            if (tp != null &amp;&amp; t != tp.left &amp;&amp; t != tp.right)\r\n                return false;\r\n            if (tl != null &amp;&amp; (tl.parent != t || tl.hash > t.hash))\r\n                return false;\r\n            if (tr != null &amp;&amp; (tr.parent != t || tr.hash &lt; t.hash))\r\n                return false;\r\n            if (t.red &amp;&amp; tl != null &amp;&amp; tl.red &amp;&amp; tr != null &amp;&amp; tr.red)\r\n                return false;\r\n            if (tl != null &amp;&amp; !checkInvariants(tl))\r\n                return false;\r\n            if (tr != null &amp;&amp; !checkInvariants(tr))\r\n                return false;\r\n            return true;\r\n        }\r\n    }\r\n\r\n}\r\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>JDK 7 HashMap source code; JDK 8 
HashMap source code<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[46],"class_list":["post-1230","post","type-post","status-publish","format-standard","hentry","category-java-code","tag-java"],"_links":{"self":[{"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/posts\/1230"}],"collection":[{"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/qaqaq.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1230"}],"version-history":[{"count":1,"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/posts\/1230\/revisions"}],"predecessor-version":[{"id":1231,"href":"https:\/\/qaqaq.top\/index.php?rest_route=\/wp\/v2\/posts\/1230\/revisions\/1231"}],"wp:attachment":[{"href":"https:\/\/qaqaq.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1230"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/qaqaq.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1230"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/qaqaq.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1230"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}