Distributed timestamps
The key to keeping snowflake IDs unique lies in the machine code (worker ID).
By default, Snowflake's machine code (workerId) is computed from the local IP address and the process ID: the last 5 bits of each are taken, shifted, and spliced together. As a result, different machines on the same LAN, or different processes on the same machine, will almost never end up with the same machine code. On a LAN spanning multiple subnets, or across data centers, collisions are still possible. Including the process ID in the machine code prevents multiple instances of the same application deployed on one server from getting the same workerId. If the machine codes must be guaranteed unique, assign a unique workerId to each node by hand.
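The exact derivation is not shown here, but a minimal sketch of that default strategy might look like the following (the class name and the choice of the last IP byte are assumptions for illustration only):

```java
import java.lang.management.ManagementFactory;
import java.net.InetAddress;

public class WorkerIdSketch {

    // Illustrative only: splice the low 5 bits of the host IP with the low 5 bits of
    // the process id into a 10-bit machine code, mirroring the default described above.
    public static long machineCode() throws Exception {
        byte[] ip = InetAddress.getLocalHost().getAddress();
        long ipPart = ip[ip.length - 1] & 0b11111;                     // last 5 bits of the last IP byte
        String runtimeName = ManagementFactory.getRuntimeMXBean().getName(); // typically "pid@hostname"
        long pid = Long.parseLong(runtimeName.split("@")[0]);
        long pidPart = pid & 0b11111;                                  // last 5 bits of the process id
        return (ipPart << 5) | pidPart;                                // 10-bit machine code
    }

    public static void main(String[] args) throws Exception {
        System.out.println(machineCode());
    }
}
```

The SystemClock helper below is a related optimization: it caches System.currentTimeMillis() in an AtomicLong that a single daemon thread refreshes every millisecond, so hot paths such as ID generation can read the time without making a system call on every invocation.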
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SystemClock {

    private final int period;
    private final AtomicLong now;

    private static class InstanceHolder {
        private static final SystemClock INSTANCE = new SystemClock(1);
    }

    private SystemClock(int period) {
        this.period = period;
        this.now = new AtomicLong(System.currentTimeMillis());
        scheduleClockUpdating();
    }

    private static SystemClock instance() {
        return InstanceHolder.INSTANCE;
    }

    private void scheduleClockUpdating() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(runnable -> {
            Thread thread = new Thread(runnable, "System Clock");
            thread.setDaemon(true);
            return thread;
        });
        final int initialDelay = period;
        scheduler.scheduleAtFixedRate(() -> now.set(System.currentTimeMillis()),
                initialDelay, period, TimeUnit.MILLISECONDS);
    }

    private long currentTimeMillis() {
        return now.get();
    }

    public static long now() {
        return instance().currentTimeMillis();
    }

    public static void main(String[] args) {
        System.out.println(SystemClock.now());
        System.out.println(System.currentTimeMillis());
        while (true) {
            SystemClock.now();
        }
    }
}
```
Snowflake algorithm
https://www.cnblogs.com/keatsCoder/p/12129279.html#_caption_0
```java
/**
 * Twitter_Snowflake<br>
 * SnowFlake的结构如下(每部分用-分开):<br>
 * 0 - 0000000000 0000000000 0000000000 0000000000 0 - 00000 - 00000 - 000000000000 <br>
 * 1位标识,由于long基本类型在Java中是带符号的,最高位是符号位,正数是0,负数是1,所以id一般是正数,最高位是0<br>
 * 41位时间截(毫秒级),注意,41位时间截不是存储当前时间的时间截,而是存储时间截的差值(当前时间截 - 开始时间截),
 * 这里的开始时间截,一般是我们的id生成器开始使用的时间,由我们程序来指定(如下面程序的twepoch属性)。
 * 41位的时间截,可以使用69年,年T = (1L << 41) / (1000L * 60 * 60 * 24 * 365) = 69<br>
 * 10位的数据机器位,可以部署在1024个节点,包括5位datacenterId和5位workerId<br>
 * 12位序列,毫秒内的计数,12位的计数顺序号支持每个节点每毫秒(同一机器,同一时间截)产生4096个ID序号<br>
 * 加起来刚好64位,为一个Long型。<br>
 * SnowFlake的优点是,整体上按照时间自增排序,并且整个分布式系统内不会产生ID碰撞(由数据中心ID和机器ID作区分),并且效率较高,经测试,SnowFlake每秒能够产生26万ID左右。
 */
public class SnowflakeDistributeId {

    // ==============================Fields===========================================
    /** 开始时间截 (2015-01-01) */
    private final long twepoch = 1420041600000L;

    /** 机器id所占的位数 */
    private final long workerIdBits = 5L;

    /** 数据标识id所占的位数 */
    private final long datacenterIdBits = 5L;

    /** 支持的最大机器id,结果是31 (这个移位算法可以很快的计算出几位二进制数所能表示的最大十进制数) */
    private final long maxWorkerId = -1L ^ (-1L << workerIdBits);

    /** 支持的最大数据标识id,结果是31 */
    private final long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);

    /** 序列在id中占的位数 */
    private final long sequenceBits = 12L;

    /** 机器ID向左移12位 */
    private final long workerIdShift = sequenceBits;

    /** 数据标识id向左移17位(12+5) */
    private final long datacenterIdShift = sequenceBits + workerIdBits;

    /** 时间截向左移22位(5+5+12) */
    private final long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;

    /** 生成序列的掩码,这里为4095 (0b111111111111=0xfff=4095) */
    private final long sequenceMask = -1L ^ (-1L << sequenceBits);

    /** 工作机器ID(0~31) */
    private long workerId;

    /** 数据中心ID(0~31) */
    private long datacenterId;

    /** 毫秒内序列(0~4095) */
    private long sequence = 0L;

    /** 上次生成ID的时间截 */
    private long lastTimestamp = -1L;

    // ==============================Constructors=====================================
    /**
     * 构造函数
     *
     * @param workerId     工作ID (0~31)
     * @param datacenterId 数据中心ID (0~31)
     */
    public SnowflakeDistributeId(long workerId, long datacenterId) {
        if (workerId > maxWorkerId || workerId < 0) {
            throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
        }
        if (datacenterId > maxDatacenterId || datacenterId < 0) {
            throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));
        }
        this.workerId = workerId;
        this.datacenterId = datacenterId;
    }

    // ==============================Methods==========================================
    /**
     * 获得下一个ID (该方法是线程安全的)
     *
     * @return SnowflakeId
     */
    public synchronized long nextId() {
        long timestamp = timeGen();

        // 如果当前时间小于上一次ID生成的时间戳,说明系统时钟回退过,这个时候应当抛出异常
        if (timestamp < lastTimestamp) {
            throw new RuntimeException(
                    String.format("Clock moved backwards. Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));
        }

        // 如果是同一时间生成的,则进行毫秒内序列
        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & sequenceMask;
            // 毫秒内序列溢出
            if (sequence == 0) {
                // 阻塞到下一个毫秒,获得新的时间戳
                timestamp = tilNextMillis(lastTimestamp);
            }
        }
        // 时间戳改变,毫秒内序列重置
        else {
            sequence = 0L;
        }

        // 上次生成ID的时间截
        lastTimestamp = timestamp;

        // 移位并通过或运算拼到一起组成64位的ID
        return ((timestamp - twepoch) << timestampLeftShift)
                | (datacenterId << datacenterIdShift)
                | (workerId << workerIdShift)
                | sequence;
    }

    /**
     * 阻塞到下一个毫秒,直到获得新的时间戳
     *
     * @param lastTimestamp 上次生成ID的时间截
     * @return 当前时间戳
     */
    protected long tilNextMillis(long lastTimestamp) {
        long timestamp = timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = timeGen();
        }
        return timestamp;
    }

    /**
     * 返回以毫秒为单位的当前时间
     *
     * @return 当前时间(毫秒)
     */
    protected long timeGen() {
        return System.currentTimeMillis();
    }
}
```
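A quick usage sketch; the subclass also shows how timeGen() could be overridden to read the cached SystemClock shown earlier instead of calling System.currentTimeMillis() on every ID (CachedClockIdWorker and the worker/datacenter values are illustrative assumptions):

```java
// Hypothetical subclass that plugs the cached clock into the generator above.
class CachedClockIdWorker extends SnowflakeDistributeId {

    CachedClockIdWorker(long workerId, long datacenterId) {
        super(workerId, datacenterId);
    }

    @Override
    protected long timeGen() {
        return SystemClock.now();   // read the millisecond clock cached by SystemClock
    }
}

// Usage: generate a few IDs and print their binary layout.
SnowflakeDistributeId idWorker = new CachedClockIdWorker(0, 0);
for (int i = 0; i < 3; i++) {
    long id = idWorker.nextId();
    System.out.println(id + "  " + Long.toBinaryString(id));
}
```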
Snowflake algorithm (improved)
https://www.cnblogs.com/keatsCoder/p/12129279.html#_caption_0
Nacos configuration
```yaml
snow-flake:
  data-center: 1
  app-name: test
```
MachineIdConfig
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PreDestroy;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.TimeUnit;

@Configuration
public class MachineIdConfig {

    private static final Logger logger = LoggerFactory.getLogger(MachineIdConfig.class);

    @Autowired
    private RedisService redisService;

    @Value("${snow-flake.data-center:1}")
    private Integer dataCenterId;

    @Value("${snow-flake.app-name:service_1}")
    private String APP_NAME;

    public static Integer machineId;

    private static String localIp;

    private String getIPAddress() throws UnknownHostException {
        InetAddress address = InetAddress.getLocalHost();
        return address.getHostAddress();
    }

    @Bean
    public SnowFlake initMachineId() throws Exception {
        localIp = getIPAddress();
        Long ip_ = Long.parseLong(localIp.replaceAll("\\.", ""));
        // 使用floorMod,避免拼接后的长IP串hashCode为负导致机器号非法
        machineId = Math.floorMod(ip_.hashCode(), 32);
        createMachineId();
        return new SnowFlake(machineId, dataCenterId);
    }

    @PreDestroy
    public void destroyMachineId() {
        redisService.del(APP_NAME + dataCenterId + machineId);
    }

    public Integer createMachineId() {
        try {
            logger.info("注册一个机器ID到Redis " + machineId + " IP:" + localIp);
            Boolean flag = registerMachine(machineId, localIp);
            if (flag) {
                updateExpTimeThread();
                logger.info("Redis中端口没有冲突 " + machineId + " IP:" + localIp);
                return machineId;
            }
            if (!checkIfCanRegister()) {
                getRandomMachineId();
                createMachineId();
            } else {
                logger.warn("Redis中端口冲突了,使用 0-31 之间未占用的Id " + machineId + " IP:" + localIp);
                createMachineId();
            }
        } catch (Exception e) {
            logger.error("Redis连接异常,不能正确注册雪花机器号 " + machineId + " IP:" + localIp, e);
            logger.warn("使用临时方案,获取 32 - 63 之间的随机数作为机器号,请及时检查Redis连接");
            getRandomMachineId();
            return machineId;
        }
        return machineId;
    }

    private Boolean checkIfCanRegister() {
        Boolean flag = true;
        for (int i = 0; i < 32; i++) {
            flag = redisService.hasKey(APP_NAME + dataCenterId + i);
            if (!flag) {
                machineId = i;
                break;
            }
        }
        return !flag;
    }

    private void updateExpTimeThread() {
        new Timer(localIp).schedule(new TimerTask() {
            @Override
            public void run() {
                Boolean b = checkIsLocalIp(String.valueOf(machineId));
                if (b) {
                    logger.info("IP一致,更新超时时间 ip:{},machineId:{}, time:{}", localIp, machineId, new Date());
                    redisService.expire(APP_NAME + dataCenterId + machineId, 60 * 60 * 24);
                } else {
                    logger.info("重新生成机器ID ip:{},machineId:{}, time:{}", localIp, machineId, new Date());
                    getRandomMachineId();
                    createMachineId();
                    SnowFlake.setWorkerId(machineId);
                    logger.info("Timer->thread->name:{}", Thread.currentThread().getName());
                    this.cancel();
                }
            }
        }, 10 * 1000, 1000 * 60 * 60 * 23);
    }

    public void getRandomMachineId() {
        // 0~31 已占满或Redis不可用时,从 32~63 中随机取一个机器号
        machineId = (int) (Math.random() * 32) + 32;
    }

    private Boolean checkIsLocalIp(String machineId) {
        String ip = (String) redisService.get(APP_NAME + dataCenterId + machineId);
        logger.info("checkIsLocalIp->ip:{}", ip);
        return localIp.equals(ip);
    }

    private Boolean registerMachine(Integer machineId, String localIp) throws Exception {
        Boolean success = redisService.setIfAbsent(APP_NAME + dataCenterId + machineId, localIp, 1L, TimeUnit.DAYS);
        if (!Boolean.TRUE.equals(success)) {
            String value = (String) redisService.get(APP_NAME + dataCenterId + machineId);
            if (localIp.equals(value)) {
                redisService.expire(APP_NAME + dataCenterId + machineId, 60 * 60 * 24);
                return true;
            }
            return false;
        }
        return true;
    }
}
```
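RedisService above is the project's own Redis wrapper and is not shown in the original. A minimal sketch of just the methods this config relies on, built on Spring Data Redis's StringRedisTemplate, might look like this (the method names and signatures are assumptions modeled on the calls above):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

import java.util.concurrent.TimeUnit;

// Hypothetical minimal RedisService exposing only the operations MachineIdConfig uses.
@Service
public class RedisService {

    @Autowired
    private StringRedisTemplate redisTemplate;

    // SET key value NX EX — returns true only if the key did not exist yet
    public Boolean setIfAbsent(String key, String value, Long timeout, TimeUnit unit) {
        return redisTemplate.opsForValue().setIfAbsent(key, value, timeout, unit);
    }

    public Object get(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    public Boolean hasKey(String key) {
        return redisTemplate.hasKey(key);
    }

    public void expire(String key, long seconds) {
        redisTemplate.expire(key, seconds, TimeUnit.SECONDS);
    }

    public void del(String key) {
        redisTemplate.delete(key);
    }
}
```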
SnowFlake
```java
import org.springframework.context.annotation.Configuration;

@Configuration
public class SnowFlake {

    private final long twepoch = 1710668460000L;

    private final long workerIdBits = 6L;

    private final long dataCenterIdBits = 4L;

    private final long maxWorkerId = -1L ^ (-1L << workerIdBits);

    private final long maxDatacenterId = -1L ^ (-1L << dataCenterIdBits);

    private final long sequenceBits = 12L;

    private final long workerIdShift = sequenceBits;

    private final long datacenterIdShift = sequenceBits + workerIdBits;

    private final long timestampLeftShift = sequenceBits + workerIdBits + dataCenterIdBits;

    private final long sequenceMask = -1L ^ (-1L << sequenceBits);

    private static long workerId;

    private long datacenterId;

    private long sequence = 0L;

    private long lastTimestamp = -1L;

    public SnowFlake(long workerId, long datacenterId) {
        if (workerId > maxWorkerId || workerId < 0) {
            throw new IllegalArgumentException(String.format("机器ID必须小于等于 %d 且大于等于 0", maxWorkerId));
        }
        if (datacenterId > maxDatacenterId || datacenterId < 0) {
            throw new IllegalArgumentException(String.format("工作组ID必须小于等于 %d 且大于等于 0", maxDatacenterId));
        }
        SnowFlake.workerId = workerId;
        this.datacenterId = datacenterId;
    }

    public SnowFlake() {
        SnowFlake.workerId = 0;
        this.datacenterId = 0;
    }

    public synchronized long nextId() {
        long timestamp = timeGen();
        if (timestamp < lastTimestamp) {
            throw new RuntimeException(
                    String.format("Clock moved backwards. Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));
        }
        if (lastTimestamp == timestamp) {
            sequence = (sequence + 1) & sequenceMask;
            if (sequence == 0) {
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = timestamp;
        return ((timestamp - twepoch) << timestampLeftShift)
                | (datacenterId << datacenterIdShift)
                | (workerId << workerIdShift)
                | sequence;
    }

    protected long tilNextMillis(long lastTimestamp) {
        long timestamp = timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = timeGen();
        }
        return timestamp;
    }

    protected long timeGen() {
        return System.currentTimeMillis();
    }

    public long getWorkerId() {
        return workerId;
    }

    public static void setWorkerId(long workerId) {
        SnowFlake.workerId = workerId;
    }

    public long getDatacenterId() {
        return datacenterId;
    }

    public void setDatacenterId(long datacenterId) {
        this.datacenterId = datacenterId;
    }
}
```
Usage
```java
@Autowired
private SnowFlake snowFlake;

long nextId = snowFlake.nextId();
```
Pseudo-random vs. true random numbers

```java
private static final Random random;

static {
    long seed = System.currentTimeMillis();
    random = new Random(seed);
}

public static void main(String[] args) {
    int base = 100;
    List<Long> list1 = getList1(base);
    List<Long> list11 = getList1(base);
    List<Long> list2 = getList2(base);
    List<Long> list21 = getList2(base);
    List<Long> list3 = getList3(base);
    List<Long> list31 = getList3(base);
    System.out.println("ThreadLocalRandom每次使用初始化种子:");
    System.out.println(JSON.toJSONString(list1));
    System.out.println(JSON.toJSONString(list11));
    System.out.println("\nRandom自定义固定种子随机数:");
    System.out.println(JSON.toJSONString(list2));
    System.out.println(JSON.toJSONString(list21));
    System.out.println("\nRandom随机固定种子随机数:");
    System.out.println(JSON.toJSONString(list3));
    System.out.println(JSON.toJSONString(list31));
}

private static List<Long> getList1(int base) {
    List<Long> list1 = Lists.newArrayList();
    long randomUtilSum = 0;
    for (int i = 0; i < base; i++) {
        double v = RandomUtil.randomDouble(100);
        Long precisionLong = PrecisionCalculationUtil.getPrecisionLong(v);
        randomUtilSum += precisionLong;
        list1.add(precisionLong);
    }
    System.err.println(randomUtilSum / base);
    return list1;
}

private static List<Long> getList2(int base) {
    List<Long> list2 = Lists.newArrayList();
    long seed = 1653473810875L;
    Random random = new Random(seed);
    long sum = 0;
    for (int i = 0; i < base; i++) {
        double r = random.nextDouble() * 100;
        Long precisionLong = PrecisionCalculationUtil.getPrecisionLong(r);
        sum += precisionLong;
        list2.add(precisionLong);
    }
    System.err.println(sum / base);
    return list2;
}

private static List<Long> getList3(int base) {
    List<Long> list3 = Lists.newArrayList();
    Random random1 = new Random();
    long sum1 = 0;
    for (int i = 0; i < base; i++) {
        double r = random1.nextDouble() * 100;
        Long precisionLong = PrecisionCalculationUtil.getPrecisionLong(r);
        sum1 += precisionLong;
        list3.add(precisionLong);
    }
    System.err.println(sum1 / base);
    return list3;
}
```
https://zhuanlan.zhihu.com/p/415851066
https://blog.51cto.com/lwc0329/4980616
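The demo above only contrasts seeded pseudo-random generators. For values that must be unpredictable rather than merely statistically uniform (tokens, verification codes), java.security.SecureRandom draws its seed from the operating system's entropy source; a small sketch:

```java
import java.security.SecureRandom;

public class SecureRandomDemo {

    public static void main(String[] args) {
        SecureRandom secureRandom = new SecureRandom();  // seeded from OS entropy
        // Unlike new Random(fixedSeed), two instances do not reproduce each other's sequences.
        System.out.println(secureRandom.nextInt(100));
        System.out.println(secureRandom.nextDouble() * 100);

        byte[] token = new byte[16];
        secureRandom.nextBytes(token);                   // e.g. raw bytes for a random token
    }
}
```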
Ehcache local cache
Dependency

```xml
<dependency>
    <groupId>org.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>3.8.0</version>
</dependency>
```
Config

```java
/**
 * 1、先创建一个CacheManagerBuilder;
 * 2、使用CacheManagerBuilder创建一个预配置(pre-configured)缓存:第一个参数为别名,第二个参数用来配置Cache;
 * 3、build方法构建并初始化;build中true参数表示进行初始化。
 **/
@Bean
public CacheManager ehCacheManager() {
    CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .withCache(EhCacheConstant.commonCache, getCommonCacheConfiguration())
            .withCache(EhCacheConstant.likesCache, getLikesCacheConfiguration())
            .build(true);
    return cacheManager;
}

// @Primary
@Bean
public Cache<String, Object> featurePostCache(CacheManager ehCacheManager) {
    // 取回预配置(pre-configured)的缓存,对于key和value值类型,要求是类型安全的,否则将抛出ClassCastException异常。
    return ehCacheManager.getCache(EhCacheConstant.featurePostCache, String.class, Object.class);
}

/**
 * 缓存类型,key(String)-value(Object),堆存储,1000 entries存储条目上限,超限则淘汰最早数据,过期时间2小时
 */
public CacheConfigurationBuilder<String, Object> getLikesCacheConfiguration() {
    CacheConfigurationBuilder<String, Object> cacheConfigurationBuilder =
            CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, Object.class,
                            ResourcePoolsBuilder.newResourcePoolsBuilder()
                                    .heap(1000, EntryUnit.ENTRIES))
                    .withExpiry(ExpiryPolicyBuilder.timeToIdleExpiration(Duration.ofHours(2)));
    return cacheConfigurationBuilder;
}

/**
 * 公共缓存类型,key(String)-value(Object),堆存储,1000 entries存储条目上限,超限则淘汰最早数据,过期时间60分钟
 */
public CacheConfigurationBuilder<String, Object> getCommonCacheConfiguration() {
    CacheConfigurationBuilder<String, Object> cacheConfigurationBuilder =
            CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, Object.class,
                            ResourcePoolsBuilder.newResourcePoolsBuilder()
                                    .heap(1000, EntryUnit.ENTRIES))
                    .withExpiry(ExpiryPolicyBuilder.timeToIdleExpiration(Duration.ofMinutes(60)));
    return cacheConfigurationBuilder;
}
```
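EhCacheConstant referenced above is not shown in the original; presumably it is just a holder for the cache aliases, along these lines (the alias strings are assumptions):

```java
// Hypothetical constants holder for the cache aliases used in the config above.
public final class EhCacheConstant {

    public static final String commonCache = "commonCache";
    public static final String likesCache = "likesCache";
    public static final String featurePostCache = "featurePostCache";

    private EhCacheConstant() {
    }
}
```

Note that the featurePostCache alias retrieved by the featurePostCache(...) bean is not registered via withCache(...) in ehCacheManager() above, so getCache(...) would return null unless a matching withCache(EhCacheConstant.featurePostCache, ...) entry is added. Likewise, the injections below expect a commonCache bean, which the original presumably defines in the same style:

```java
// Presumed companion bean exposing the commonCache alias for injection.
@Bean
public Cache<String, Object> commonCache(CacheManager ehCacheManager) {
    return ehCacheManager.getCache(EhCacheConstant.commonCache, String.class, Object.class);
}
```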
If several Cache&lt;String, Object&gt; beans with different names need to be injected, injecting by type alone will fail at startup because two candidates are found. Use @Primary on one of the beans; it then serves as the default whenever no bean name is specified.
```java
@Resource(name = "featurePostCache")
Cache<String, Object> featurePostCache;

@Autowired
@Qualifier(value = "commonCache")
Cache<String, Object> commonCache;

@Resource
Cache<String, Object>[] commonCacheArr;

@Resource
Cache<String, Object> commonCacheTest;
```
Startup class: add the @EnableCaching annotation to enable caching.
```java
@EnableCaching
@SpringBootApplication
public class SpringBootMainApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringBootMainApplication.class, args);
    }
}
```
Usage: put an entry into the heap cache configured under the commonCache alias.
```java
@Autowired
@Qualifier(value = "commonCache")
Cache<String, Object> commonCache;

// Write to and read back from the commonCache heap cache.
Object value = commonCache.get("key");      // null before the put
commonCache.put("key", "test_value");
value = commonCache.get("key");             // "test_value"
```
Ehcache can also be used declaratively with annotations such as @Cacheable(value = "personCache", key = "#入参") (where #入参 is the SpEL expression for the method parameter), as in the sketch below.
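A hedged sketch of that annotation-driven style; the service, method, and cache name are illustrative, and with Ehcache 3 this additionally requires the caches to be exposed through a Spring CacheManager (typically via the JSR-107/JCache bridge):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical service illustrating the @Cacheable style mentioned above.
@Service
public class PersonService {

    @Cacheable(value = "personCache", key = "#id")
    public String findById(Long id) {
        // Runs only on a cache miss; the return value is stored in "personCache" under the id.
        return "person-" + id;
    }
}
```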
Random code (invite code)

```java
import java.util.HashSet;
import java.util.Random;

public class InviteCodeGenerateUtil {

    private static final char[] BASE = new char[]{
            'M', '8', 'I', 'Y', 'X', '9', 'D', '6', '3', 'G', 'Q', '7', 'P', 'C', 'N', 'Z',
            '5', 'U', 'J', 'F', 'R', '4', 'E', 'V', 'W', 'L', 'H', 'K', 'S', 'T', 'B', '2'};

    private static final char SUFFIX_CHAR = 'A';

    private static final int BIN_LEN = BASE.length;

    public static final int CODE_LEN = 6;

    public static String idToCode(Long id) {
        char[] buf = new char[BIN_LEN];
        int charPos = BIN_LEN;
        while (id / BIN_LEN > 0) {
            int index = (int) (id % BIN_LEN);
            buf[--charPos] = BASE[index];
            id /= BIN_LEN;
        }
        buf[--charPos] = BASE[(int) (id % BIN_LEN)];
        String result = new String(buf, charPos, BIN_LEN - charPos);
        int len = result.length();
        if (len < CODE_LEN) {
            StringBuilder sb = new StringBuilder();
            sb.append(SUFFIX_CHAR);
            Random random = new Random();
            for (int i = 0; i < CODE_LEN - len - 1; i++) {
                sb.append(BASE[random.nextInt(BIN_LEN)]);
            }
            result += sb.toString();
        }
        return result;
    }

    public static Long codeToId(String code) {
        char[] charArray = code.toCharArray();
        long result = 0L;
        for (int i = 0; i < charArray.length; i++) {
            int index = 0;
            for (int j = 0; j < BIN_LEN; j++) {
                if (charArray[i] == BASE[j]) {
                    index = j;
                    break;
                }
            }
            if (charArray[i] == SUFFIX_CHAR) {
                break;
            }
            if (i > 0) {
                result = result * BIN_LEN + index;
            } else {
                result = index;
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        HashSet<Object> set = new HashSet<>();
        for (Long i = 0L; i < 20000000; i++) {
            String code = idToCode(i);
            if (!set.add(code)) {
                System.out.println("code is " + code + ",id is " + i);
                throw new Exception();
            }
        }
    }
}
```
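A short round-trip example of the utility above:

```java
long id = 12345L;
String code = InviteCodeGenerateUtil.idToCode(id);  // 3 significant chars, then 'A' plus random padding, 6 chars total
long back = InviteCodeGenerateUtil.codeToId(code);  // decoding stops at the 'A' suffix, so back == 12345
```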
Misc
1. A hash collision simply means that different objects produce the same hashCode.
2. Why do we need hashCode at all? Take "how HashSet checks for duplicates" as an example. The following passage is adapted from my Java primer, Head First Java: when you add an object to a HashSet, it first computes the object's hashCode to decide where the object should go, and compares that value against the hashCodes of the objects already in the set. If no matching hashCode is found, the HashSet assumes the object is not a duplicate. If an object with the same hashCode does exist, equals() is then called to check whether the two objects are really the same; if they are, the add does not go through, and if they are not, the object is re-hashed to another location. This greatly reduces the number of equals() calls and therefore speeds things up considerably (see the sketch below).
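A small sketch of that add/lookup path (the Point class is purely illustrative):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class HashCodeDemo {

    static class Point {
        final int x, y;

        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override
        public int hashCode() {
            // hashCode is compared first; equals() only runs when two hash codes match
            return Objects.hash(x, y);
        }
    }

    public static void main(String[] args) {
        Set<Point> set = new HashSet<>();
        set.add(new Point(1, 2));
        System.out.println(set.add(new Point(1, 2))); // false: same hashCode, equals() confirms the duplicate
        System.out.println(set.add(new Point(3, 4))); // true: different hashCode, treated as a new element
    }
}
```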
3. The main purpose of serialization is to send objects across the network, or to store them in the file system, a database, or memory.
In the OSI seven-layer model, the presentation layer's job is essentially to convert the application layer's user data into a binary stream, and, in the reverse direction, to turn a binary stream back into user data for the application layer. That is exactly serialization and deserialization.
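A minimal Java serialization round trip (class and field names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // The object to be transferred or persisted must implement Serializable.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize: object -> byte stream (the bytes could just as well go over the network)
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new User("alice"));
        }
        // Deserialize: byte stream -> object
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            User user = (User) in.readObject();
            System.out.println(user.name); // alice
        }
    }
}
```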