commit a3d313e3302feca0b4fc0660c05f94010f133552
Author: Ben Hutchings
Date:   Tue Aug 20 19:01:58 2019 +0100

    Linux 3.16.73

commit ba7df77b948f22b519acfef0a8077bd4af05e85c
Author: Jason A. Donenfeld
Date:   Sun Jan 8 13:54:01 2017 +0100

    siphash: implement HalfSipHash1-3 for hash tables

    commit 1ae2324f732c9c4e2fa4ebd885fa1001b70d52e1 upstream.

    HalfSipHash, or hsiphash, is a shortened version of SipHash, which
    generates 32-bit outputs using a weaker 64-bit key. It has *much*
    lower security margins, and shouldn't be used for anything too
    sensitive, but it could be used as a hashtable key function
    replacement, if the output is never exposed, and if the security
    requirement is not too high.

    The goal is to make this something that performance-critical jhash
    users would be willing to use.

    On 64-bit machines, HalfSipHash1-3 is slower than SipHash1-3, so we
    alias SipHash1-3 to HalfSipHash1-3 on those systems.

    64-bit x86_64:
    [ 0.509409] test_siphash: SipHash2-4 cycles: 4049181
    [ 0.510650] test_siphash: SipHash1-3 cycles: 2512884
    [ 0.512205] test_siphash: HalfSipHash1-3 cycles: 3429920
    [ 0.512904] test_siphash: JenkinsHash cycles: 978267
    So, we map hsiphash() -> SipHash1-3

    32-bit x86:
    [ 0.509868] test_siphash: SipHash2-4 cycles: 14812892
    [ 0.513601] test_siphash: SipHash1-3 cycles: 9510710
    [ 0.515263] test_siphash: HalfSipHash1-3 cycles: 3856157
    [ 0.515952] test_siphash: JenkinsHash cycles: 1148567
    So, we map hsiphash() -> HalfSipHash1-3

    hsiphash() is roughly 3 times slower than jhash(), but comes with a
    considerable security improvement.

    Signed-off-by: Jason A. Donenfeld
    Reviewed-by: Jean-Philippe Aumasson
    Signed-off-by: David S. Miller
    [bwh: Backported to 3.16 to avoid a build regression for WireGuard
     with only part of the siphash API available]
    Signed-off-by: Ben Hutchings
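As a rough usage sketch of the hash-table replacement described above (not part
of any patch in this changelog; struct flow_key, flow_hash_key and the function
names are purely illustrative), a bucket-keying function that currently uses
jhash() might adopt the hsiphash() interface like this:

#include <linux/types.h>
#include <linux/random.h>
#include <linux/siphash.h>

/* Illustrative only: a hypothetical flow lookup key and its keyed hash. */
struct flow_key {
	__be32 saddr, daddr;
	__be16 sport, dport;
};

static hsiphash_key_t flow_hash_key;

static void flow_hash_init(void)
{
	/* The key must stay secret, or the collision resistance is lost. */
	get_random_bytes(&flow_hash_key, sizeof(flow_hash_key));
}

static u32 flow_hash(const struct flow_key *fk)
{
	/*
	 * hsiphash() returns a 32-bit keyed hash; per the commit above it
	 * is aliased to SipHash1-3 on 64-bit machines.  The caller masks
	 * or shifts the result down to its bucket count, as with jhash().
	 */
	return hsiphash(fk, sizeof(*fk), &flow_hash_key);
}

Unlike jhash() with a fixed or guessable seed, the output here depends on a
random key, which is the security improvement the commit message refers to.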
commit 2d74faa1f0c8a23ffb83146f23526ea812f1ef7f
Author: zhangyi (F)
Date:   Sat Mar 23 11:56:01 2019 -0400

    ext4: cleanup bh release code in ext4_ind_remove_space()

    commit 5e86bdda41534e17621d5a071b294943cae4376e upstream.

    Currently, we release each indirect buffer as soon as we are done
    with it in ext4_ind_remove_space(), so brelse() and BUFFER_TRACE()
    calls are scattered everywhere. That is fragile and hard to read, and
    we may well forget to release a buffer some day. This patch cleans up
    the code by moving the buffer release code to the end of the
    function.

    Signed-off-by: zhangyi (F)
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara
    Signed-off-by: Ben Hutchings

commit f1fbae6cd2bd84b17e399d969a2f889ac1bd9afb
Author: zhangyi (F)
Date:   Sat Mar 23 11:43:05 2019 -0400

    ext4: brelse all indirect buffer in ext4_ind_remove_space()

    commit 674a2b27234d1b7afcb0a9162e81b2e53aeef217 upstream.

    All indirect buffers obtained by ext4_find_shared() should be
    released, no matter whether the branch is to be freed or not. But
    currently we forget to release the lower-depth indirect buffers when
    removing space within the same higher-depth indirect block. This
    leads to a buffer leak and, furthermore, may lead to quota
    information corruption when using the old quota format. Consider the
    following case:

    - Create and mount an empty ext4 filesystem without the extent and
      quota features,
    - run quotacheck and enable the user & group quota,
    - create some files, write some data to them, and then punch holes
      in some of them; this may trigger the buffer leak problem
      mentioned above,
    - disable quota and run quotacheck again; it will create two new
      aquota files and write the checked quota information to them,
      which may reuse the freed indirect block (whose buffer and page
      cache were not freed) as a data block,
    - enable quota again; it will invoke
      vfs_load_quota_inode()->invalidate_bdev() to try to clean unused
      buffers and page cache. Unfortunately, because the buffer of the
      quota data block is still referenced, the quota code cannot read
      up-to-date quota info from the device, leading to quota
      information corruption.

    This problem can be reproduced by xfstests generic/231 on an ext3
    file system, or on an ext4 file system without the extent and quota
    features.

    This patch fixes the problem by releasing the missing indirect
    buffers in ext4_ind_remove_space().

    Reported-by: Hulk Robot
    Signed-off-by: zhangyi (F)
    Signed-off-by: Theodore Ts'o
    Reviewed-by: Jan Kara
    Signed-off-by: Ben Hutchings

commit 3157fbc900bdb366b2186e5a6e506cc5e4697cf0
Author: Ben Hutchings
Date:   Sat Aug 3 17:11:42 2019 +0100

    tcp: Clear sk_send_head after purging the write queue

    Denis Andzakovic discovered a potential use-after-free in older
    kernel versions, using syzkaller. tcp_write_queue_purge() frees all
    skbs in the TCP write queue and can leave sk->sk_send_head pointing
    to freed memory. tcp_disconnect() clears that pointer after calling
    tcp_write_queue_purge(), but tcp_connect() does not. It is
    (surprisingly) possible to add to the write queue between
    disconnection and reconnection, so this needs to be done in both
    places.

    This bug was introduced by backports of commit 7f582b248d0a ("tcp:
    purge write queue in tcp_connect_init()") and does not exist upstream
    because of earlier changes in commit 75c119afe14f ("tcp: implement
    rb-tree based retransmit queue"). The latter is a major change that's
    not suitable for stable.

    Reported-by: Denis Andzakovic
    Bisected-by: Salvatore Bonaccorso
    Fixes: 7f582b248d0a ("tcp: purge write queue in tcp_connect_init()")
    Cc: # before 4.15
    Cc: Eric Dumazet
    Signed-off-by: Ben Hutchings
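For reference, a minimal sketch of the kind of change the message above
describes (illustrative only, not the literal 3.16 hunk; the wrapper name is
hypothetical): whichever path purges the write queue outside tcp_disconnect()
must also reset the cached send head, exactly as tcp_disconnect() already does.

#include <net/tcp.h>

/*
 * Hypothetical helper illustrating the fix described above: purging the
 * write queue frees every queued skb, so sk_send_head must be cleared
 * at the same time or it is left pointing at freed memory.
 */
static inline void tcp_purge_and_clear_send_head(struct sock *sk)
{
	tcp_write_queue_purge(sk);	/* frees all skbs on the write queue */
	sk->sk_send_head = NULL;	/* mirror what tcp_disconnect() does */
}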