Changelog in Linux kernel 6.6.116

 
arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c [+ + +]
Author: Menglong Dong <[email protected]>
Date:   Wed Sep 17 14:09:13 2025 +0800

    arch: Add the macro COMPILE_OFFSETS to all the asm-offsets.c
    
    [ Upstream commit 35561bab768977c9e05f1f1a9bc00134c85f3e28 ]
    
    The include/generated/asm-offsets.h is generated in Kbuild during
    compiling from arch/SRCARCH/kernel/asm-offsets.c. When we want to
    generate another similar offset header file, circular dependency can
    happen.
    
    For example, we want to generate an offset file include/generated/test.h,
    which is included in include/sched/sched.h. If we generate asm-offsets.h
    first, it will fail, as include/sched/sched.h is included in asm-offsets.c
    and include/generated/test.h doesn't exist; if we generate test.h first,
    it can't succeed either, as include/generated/asm-offsets.h is included
    by it.
    
    On x86_64, the macro COMPILE_OFFSETS is used to avoid such a circular
    dependency. We can generate asm-offsets.h first, and if
    COMPILE_OFFSETS is defined, we don't include "generated/test.h".
    
    So define the macro COMPILE_OFFSETS in all the asm-offsets.c files for
    this purpose.
    
    Signed-off-by: Menglong Dong <[email protected]>
    Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
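
The guard described above boils down to a plain preprocessor pattern; the header name, macro name of the generated constant, and fallback value below are illustrative sketches, not the actual kernel sources:

```c
#include <assert.h>

/* asm-offsets.c is built with COMPILE_OFFSETS defined, so any header that
 * would normally pull in a generated offsets file can skip it, breaking
 * the circular dependency. */
#define COMPILE_OFFSETS

#ifndef COMPILE_OFFSETS
/* Normal builds: use the generated header (hypothetical names). */
#include "generated/test.h"
#define TEST_OFFSET GENERATED_TEST_OFFSET
#else
/* Offsets-generation pass: no generated headers may be included yet. */
#define TEST_OFFSET 0
#endif

static int test_offset(void)
{
	return TEST_OFFSET;
}
```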

 
audit: record fanotify event regardless of presence of rules [+ + +]
Author: Richard Guy Briggs <[email protected]>
Date:   Wed Aug 6 17:04:07 2025 -0400

    audit: record fanotify event regardless of presence of rules
    
    [ Upstream commit ce8370e2e62a903e18be7dd0e0be2eee079501e1 ]
    
    When no audit rules are in place, fanotify event results are
    unconditionally dropped due to an explicit check for the existence of
    any audit rules.  Given this is a report from another security
    sub-system, allow it to be recorded regardless of the existence of any
    audit rules.
    
    To test, install and run the fapolicyd daemon with default config.  Then
    as an unprivileged user, create and run a very simple binary that should
    be denied.  Then check for an event with
            ausearch -m FANOTIFY -ts recent
    
    Link: https://issues.redhat.com/browse/RHEL-9065
    Signed-off-by: Richard Guy Briggs <[email protected]>
    Signed-off-by: Paul Moore <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
bits: add comments and newlines to #if, #else and #endif directives [+ + +]
Author: Vincent Mailhol <[email protected]>
Date:   Fri Oct 31 22:11:57 2025 +0900

    bits: add comments and newlines to #if, #else and #endif directives
    
    [ Upstream commit 31299a5e0211241171b2222c5633aad4763bf700 ]
    
    This is a preparation for the upcoming GENMASK_U*() and BIT_U*()
    changes. After introducing those new macros, there will be a lot of
    scrolling between the #if, #else and #endif.
    
    Add a comment to the #else and #endif preprocessor macros to help keep
    track of which context we are in. Also, add new lines to better
    visually separate the non-asm and asm sections.
    
    Signed-off-by: Vincent Mailhol <[email protected]>
    Reviewed-by: Andy Shevchenko <[email protected]>
    Signed-off-by: Yury Norov <[email protected]>
    Stable-dep-of: 2ba5772e530f ("gpio: idio-16: Define fixed direction of the GPIO lines")
    Signed-off-by: William Breathitt Gray <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
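
A minimal illustration of the commenting style this commit applies (the macro below is an invented example, not the bits.h contents):

```c
#include <assert.h>

#if !defined(__ASSEMBLY__)

/* non-asm section: C-only constructs are allowed here */
#define EXAMPLE_BIT(n) (1UL << (n))

#else /* defined(__ASSEMBLY__) */

/* asm section: only assembler-friendly expressions */
#define EXAMPLE_BIT(n) (1 << (n))

#endif /* !defined(__ASSEMBLY__) */
```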

bits: introduce fixed-type GENMASK_U*() [+ + +]
Author: Vincent Mailhol <[email protected]>
Date:   Fri Oct 31 22:11:58 2025 +0900

    bits: introduce fixed-type GENMASK_U*()
    
    [ Upstream commit 19408200c094858d952a90bf4977733dc89a4df5 ]
    
    Add GENMASK_TYPE() which generalizes __GENMASK() to support different
    types, and implement fixed-type versions of GENMASK() based on it.
    The fixed-type versions allow stricter checks on the accepted min/max
    values, which is useful for defining registers like those implemented
    by the i915 and xe drivers with their REG_GENMASK*() macros.
    
    The strict checks rely on shift-count-overflow compiler check to fail
    the build if a number outside of the range allowed is passed.
    Example:
    
      #define FOO_MASK GENMASK_U32(33, 4)
    
    will generate a warning like:
    
      include/linux/bits.h:51:27: error: right shift count >= width of type [-Werror=shift-count-overflow]
         51 |               type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h)))))
            |                           ^~
    
    The result is cast to the corresponding fixed width type. For
    example, GENMASK_U8() returns a u8. Note that because of the C
    promotion rules, GENMASK_U8() and GENMASK_U16() will immediately be
    promoted to int if used in an expression. Regardless, the main goal is
    not to get the correct type, but rather to enforce more checks at
    compile time.
    
    While GENMASK_TYPE() is crafted to cover all variants, including the
    already existing GENMASK(), GENMASK_ULL() and GENMASK_U128(), for the
    moment, only use it for the newly introduced GENMASK_U*(). The
    consolidation will be done in a separate change.
    
    Co-developed-by: Yury Norov <[email protected]>
    Signed-off-by: Lucas De Marchi <[email protected]>
    Acked-by: Jani Nikula <[email protected]>
    Signed-off-by: Vincent Mailhol <[email protected]>
    Reviewed-by: Andy Shevchenko <[email protected]>
    Signed-off-by: Yury Norov <[email protected]>
    Stable-dep-of: 2ba5772e530f ("gpio: idio-16: Define fixed direction of the GPIO lines")
    Signed-off-by: William Breathitt Gray <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
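
A simplified sketch of the mechanism (the SKETCH_ names are invented; the real GENMASK_U32() is built on GENMASK_TYPE(), type_max() and BITS_PER_TYPE()). Passing an out-of-range high bit makes the shift count invalid, which the compiler's shift-count diagnostics turn into a build failure under -Werror:

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_BITS_PER_U32 32

/* For h outside 0..31 the shift count (32 - 1 - h) goes out of range,
 * which the compiler flags with a shift-count warning, failing the
 * build under -Werror: the same idea as the strict checks above. */
#define SKETCH_GENMASK_U32(h, l) \
	((uint32_t)((UINT32_MAX >> (SKETCH_BITS_PER_U32 - 1 - (h))) & \
		    (UINT32_MAX << (l))))
```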

 
btrfs: always drop log root tree reference in btrfs_replay_log() [+ + +]
Author: Filipe Manana <[email protected]>
Date:   Wed Aug 27 12:10:28 2025 +0100

    btrfs: always drop log root tree reference in btrfs_replay_log()
    
    [ Upstream commit 2f5b8095ea47b142c56c09755a8b1e14145a2d30 ]
    
    Currently we have this odd behaviour:
    
    1) At btrfs_replay_log() we drop the reference of the log root tree if
       the call to btrfs_recover_log_trees() failed;
    
    2) But if the call to btrfs_recover_log_trees() did not fail, we don't
       drop the reference in btrfs_replay_log() - we expect that
       btrfs_recover_log_trees() does it in case it returns success.
    
    Let's simplify this and make btrfs_replay_log() always drop the
    reference on the log root tree. Not only does this simplify the code,
    it is also what makes sense, since it's btrfs_replay_log() that grabbed
    the reference in the first place.
    
    Signed-off-by: Filipe Manana <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

btrfs: scrub: replace max_t()/min_t() with clamp() in scrub_throttle_dev_io() [+ + +]
Author: Thorsten Blum <[email protected]>
Date:   Mon Sep 1 17:01:44 2025 +0200

    btrfs: scrub: replace max_t()/min_t() with clamp() in scrub_throttle_dev_io()
    
    [ Upstream commit a7f3dfb8293c4cee99743132d69863a92e8f4875 ]
    
    Replace max_t() followed by min_t() with a single clamp().
    
    As pointed out by David Laight in
    https://lore.kernel.org/linux-btrfs/20250906122458.75dfc8f0@pumpkin/
    the calculation may overflow u32 when the input value is too large, so
    clamp_t() is not used.  In practice the expected values are in the range
    of megabytes to gigabytes (throughput limit) so the bug would not happen.
    
    Signed-off-by: Thorsten Blum <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    [ Use clamp() and add explanation. ]
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
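
The transformation can be illustrated with local stand-ins for the kernel helpers (the sketch_ macros and the throttle_delay() wrapper are invented for this example):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for the kernel's max_t()/min_t()/clamp() helpers. */
#define sketch_max(a, b) ((a) > (b) ? (a) : (b))
#define sketch_min(a, b) ((a) < (b) ? (a) : (b))
#define sketch_clamp(val, lo, hi) sketch_min(sketch_max(val, lo), hi)

static uint64_t throttle_delay(uint64_t delay, uint64_t lo, uint64_t hi)
{
	/* Before: delay = max_t(u64, delay, lo);
	 *         delay = min_t(u64, delay, hi);
	 * After: a single clamp() expresses the same bounds in one step. */
	return sketch_clamp(delay, lo, hi);
}
```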

btrfs: use level argument in log tree walk callback replay_one_buffer() [+ + +]
Author: Filipe Manana <[email protected]>
Date:   Thu Aug 28 17:46:18 2025 +0100

    btrfs: use level argument in log tree walk callback replay_one_buffer()
    
    [ Upstream commit 6cb7f0b8c9b0d6a35682335fea88bd26f089306f ]
    
    We already have the extent buffer's level in an argument, there's no need
    to first ensure the extent buffer's data is loaded (by calling
    btrfs_read_extent_buffer()) and then call btrfs_header_level() to check
    the level. So use the level argument and do the check before calling
    btrfs_read_extent_buffer().
    
    Signed-off-by: Filipe Manana <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

btrfs: use smp_mb__after_atomic() when forcing COW in create_pending_snapshot() [+ + +]
Author: Filipe Manana <[email protected]>
Date:   Mon Sep 22 12:09:14 2025 +0100

    btrfs: use smp_mb__after_atomic() when forcing COW in create_pending_snapshot()
    
    [ Upstream commit 45c222468d33202c07c41c113301a4b9c8451b8f ]
    
    After setting the BTRFS_ROOT_FORCE_COW flag on the root we are doing a
    full write barrier, smp_wmb(), but we don't need to; all we need is an
    smp_mb__after_atomic().  The use of the smp_wmb() is from the old days
    when we didn't use a bit and used instead an int field in the root to
    signal if cow is forced. After the int field was changed to a bit in
    the root's state (flags field), we forgot to update the memory barrier
    in create_pending_snapshot() to smp_mb__after_atomic(), but we did the
    change in commit_fs_roots() after clearing BTRFS_ROOT_FORCE_COW. That
    happened in commit 27cdeb7096b8 ("Btrfs: use bitfield instead of integer
    data type for the some variants in btrfs_root"). On the reader side, in
    should_cow_block(), we also use the counterpart smp_mb__before_atomic()
    which generates further confusion.
    
    So change the smp_wmb() to smp_mb__after_atomic(). In fact we don't
    even need any barrier at all since create_pending_snapshot() is called
    in the critical section of a transaction commit and therefore no one
    can concurrently join/attach the transaction, or start a new one, until
    the transaction is unblocked. By the time someone starts a new transaction
    and enters should_cow_block(), a lot of implicit memory barriers already
    took place by having acquired several locks such as fs_info->trans_lock
    and extent buffer locks on the root node at least. Nevertheless, for
    consistency use smp_mb__after_atomic() after setting the force cow bit
    in create_pending_snapshot().
    
    Signed-off-by: Filipe Manana <[email protected]>
    Reviewed-by: David Sterba <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

btrfs: zoned: refine extent allocator hint selection [+ + +]
Author: Naohiro Aota <[email protected]>
Date:   Wed Jul 16 11:13:15 2025 +0900

    btrfs: zoned: refine extent allocator hint selection
    
    [ Upstream commit 0d703963d297964451783e1a0688ebdf74cd6151 ]
    
    The hint block group selection in the extent allocator is wrong in the
    first place, as it can select the dedicated data relocation block group for
    the normal data allocation.
    
    Since we separated the normal data space_info and the data relocation
    space_info, we can easily identify whether a block group is for data
    relocation or not. Do not choose it for the normal data allocation.
    
    Reviewed-by: Johannes Thumshirn <[email protected]>
    Signed-off-by: Naohiro Aota <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

btrfs: zoned: return error from btrfs_zone_finish_endio() [+ + +]
Author: Johannes Thumshirn <[email protected]>
Date:   Tue Jul 22 13:39:11 2025 +0200

    btrfs: zoned: return error from btrfs_zone_finish_endio()
    
    [ Upstream commit 3c44cd3c79fcb38a86836dea6ff8fec322a9e68c ]
    
    Now that btrfs_zone_finish_endio_workfn() is directly calling
    do_zone_finish() the only caller of btrfs_zone_finish_endio() is
    btrfs_finish_one_ordered().
    
    btrfs_finish_one_ordered() already has error handling in-place so
    btrfs_zone_finish_endio() can return an error if the block group lookup
    fails.
    
    Also as btrfs_zone_finish_endio() already checks for zoned filesystems and
    returns early, there's no need to do this in the caller.
    
    Reviewed-by: Damien Le Moal <[email protected]>
    Signed-off-by: Johannes Thumshirn <[email protected]>
    Signed-off-by: David Sterba <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>

 
EDAC/mc_sysfs: Increase legacy channel support to 16 [+ + +]
Author: Avadhut Naik <[email protected]>
Date:   Tue Sep 16 20:30:17 2025 +0000

    EDAC/mc_sysfs: Increase legacy channel support to 16
    
    [ Upstream commit 6e1c2c6c2c40ce99e0d2633b212f43c702c1a002 ]
    
    Newer AMD systems can support up to 16 channels per EDAC "mc" device.
    These are detected by the EDAC module running on the device, and the
    current EDAC interface is appropriately enumerated.
    
    The legacy EDAC sysfs interface, however, provides device attributes for
    channels 0 through 11 only. Consequently, the last four channels, 12
    through 15, will not be enumerated and will not be visible through the
    legacy sysfs interface.
    
    Add additional device attributes to ensure that all 16 channels, if
    present, are enumerated by and visible through the legacy EDAC sysfs
    interface.
    
    Signed-off-by: Avadhut Naik <[email protected]>
    Signed-off-by: Borislav Petkov (AMD) <[email protected]>
    Link: https://lore.kernel.org/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

 
gpio: idio-16: Define fixed direction of the GPIO lines [+ + +]
Author: William Breathitt Gray <[email protected]>
Date:   Fri Oct 31 22:12:01 2025 +0900

    gpio: idio-16: Define fixed direction of the GPIO lines
    
    [ Upstream commit 2ba5772e530f73eb847fb96ce6c4017894869552 ]
    
    The direction of the IDIO-16 GPIO lines is fixed with the first 16 lines
    as output and the remaining 16 lines as input. Set the gpio_config
    fixed_direction_output member to represent the fixed direction of the
    GPIO lines.
    
    Fixes: db02247827ef ("gpio: idio-16: Migrate to the regmap API")
    Reported-by: Mark Cave-Ayland <[email protected]>
    Closes: https://lore.kernel.org/r/[email protected]
    Suggested-by: Michael Walle <[email protected]>
    Cc: [email protected] # ae495810cffe: gpio: regmap: add the .fixed_direction_output configuration parameter
    Cc: [email protected]
    Reviewed-by: Andy Shevchenko <[email protected]>
    Signed-off-by: William Breathitt Gray <[email protected]>
    Reviewed-by: Linus Walleij <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Bartosz Golaszewski <[email protected]>
    Signed-off-by: William Breathitt Gray <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
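
For the direction layout described above, a fixed_direction_output bitmap would have the low 16 bits set. This is a sketch only: the driver builds the bitmap with the kernel's bitmap API rather than a plain integer, and the helper name here is invented.

```c
#include <assert.h>
#include <stdint.h>

#define IDIO16_NGPIO 32	/* 16 outputs followed by 16 inputs */
#define IDIO16_NOUT  16

/* One set bit per fixed-output line: lines 0..15 output, 16..31 input. */
static uint32_t idio16_fixed_direction_output(void)
{
	return (uint32_t)((1ULL << IDIO16_NOUT) - 1);
}
```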

gpio: regmap: add the .fixed_direction_output configuration parameter [+ + +]
Author: Ioana Ciornei <[email protected]>
Date:   Fri Oct 31 22:12:00 2025 +0900

    gpio: regmap: add the .fixed_direction_output configuration parameter
    
    [ Upstream commit 00aaae60faf554c27c95e93d47f200a93ff266ef ]
    
    There are GPIO controllers such as the one present in the LX2160ARDB
    QIXIS FPGA which have fixed-direction input and output GPIO lines mixed
    together in a single register. This cannot be modeled using the
    gpio-regmap as-is since there is no way to present the true direction of
    a GPIO line.
    
    In order to make this use case possible, add a new configuration
    parameter - fixed_direction_output - into the gpio_regmap_config
    structure. This will enable user drivers to provide a bitmap that
    represents the fixed direction of the GPIO lines.
    
    Signed-off-by: Ioana Ciornei <[email protected]>
    Acked-by: Bartosz Golaszewski <[email protected]>
    Reviewed-by: Michael Walle <[email protected]>
    Signed-off-by: Bartosz Golaszewski <[email protected]>
    Stable-dep-of: 2ba5772e530f ("gpio: idio-16: Define fixed direction of the GPIO lines")
    Signed-off-by: William Breathitt Gray <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

gpio: regmap: Allow to allocate regmap-irq device [+ + +]
Author: Mathieu Dubois-Briand <[email protected]>
Date:   Fri Oct 31 22:11:59 2025 +0900

    gpio: regmap: Allow to allocate regmap-irq device
    
    [ Upstream commit 553b75d4bfe9264f631d459fe9996744e0672b0e ]
    
    GPIO controllers often have IRQ support: allow easily allocating
    both gpio-regmap and regmap-irq in one operation.
    
    Reviewed-by: Andy Shevchenko <[email protected]>
    Acked-by: Bartosz Golaszewski <[email protected]>
    Signed-off-by: Mathieu Dubois-Briand <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Lee Jones <[email protected]>
    Stable-dep-of: 2ba5772e530f ("gpio: idio-16: Define fixed direction of the GPIO lines")
    Signed-off-by: William Breathitt Gray <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
Linux: Linux 6.6.116 [+ + +]
Author: Greg Kroah-Hartman <[email protected]>
Date:   Sun Nov 2 22:14:42 2025 +0900

    Linux 6.6.116
    
    Link: https://lore.kernel.org/r/[email protected]
    Tested-by: Peter Schneider <[email protected]>
    Tested-by: Florian Fainelli <[email protected]>
    Tested-by: Jon Hunter <[email protected]>
    Tested-by: Shuah Khan <[email protected]>
    Tested-by: Linux Kernel Functional Testing <[email protected]>
    Tested-by: Ron Economos <[email protected]>
    Tested-by: Brett A C Sheffield <[email protected]>
    Tested-by: Miguel Ojeda <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
mptcp: pm: in-kernel: C-flag: handle late ADD_ADDR [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Mon Oct 27 12:28:55 2025 -0400

    mptcp: pm: in-kernel: C-flag: handle late ADD_ADDR
    
    [ Upstream commit e84cb860ac3ce67ec6ecc364433fd5b412c448bc ]
    
    The special C-flag case expects the ADD_ADDR to be received when
    switching to 'fully-established'. But for various reasons, the ADD_ADDR
    could be sent after the "4th ACK", and the special case doesn't work.
    
    On NIPA, the new test validating this special case for the C-flag failed
    a few times, e.g.
    
      102 default limits, server deny join id 0
            syn rx                 [FAIL] got 0 JOIN[s] syn rx expected 2
    
      Server ns stats
      (...)
      MPTcpExtAddAddrTx  1
      MPTcpExtEchoAdd    1
    
      Client ns stats
      (...)
      MPTcpExtAddAddr    1
      MPTcpExtEchoAddTx  1
    
            synack rx              [FAIL] got 0 JOIN[s] synack rx expected 2
            ack rx                 [FAIL] got 0 JOIN[s] ack rx expected 2
            join Rx                [FAIL] see above
            syn tx                 [FAIL] got 0 JOIN[s] syn tx expected 2
            join Tx                [FAIL] see above
    
    I had a suspicion about what the issue could be: the ADD_ADDR might have
    been received after the switch to the 'fully-established' state. The
    issue was not easy to reproduce. The packet capture showed that the
    ADD_ADDR can indeed be sent with a delay, and the client would not try
    to establish subflows to it as expected.
    
    A simple fix is not to mark the endpoints as 'used' in the C-flag case,
    when looking at creating subflows to the remote initial IP address and
    port. In this case, there is no need to try.
    
    Note: newly added fullmesh endpoints will still continue to be used as
    expected, thanks to the conditions behind mptcp_pm_add_addr_c_flag_case.
    
    Fixes: 4b1ff850e0c1 ("mptcp: pm: in-kernel: usable client side with C-flag")
    Cc: [email protected]
    Reviewed-by: Geliang Tang <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20251020-net-mptcp-c-flag-late-add-addr-v1-1-8207030cb0e8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    [ applied to pm_netlink.c instead of pm_kernel.c ]
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
net/sched: sch_qfq: Fix null-deref in agg_dequeue [+ + +]
Author: Xiang Mei <[email protected]>
Date:   Sat Jul 5 14:21:43 2025 -0700

    net/sched: sch_qfq: Fix null-deref in agg_dequeue
    
    commit dd831ac8221e691e9e918585b1003c7071df0379 upstream.
    
    To prevent a potential crash in agg_dequeue (net/sched/sch_qfq.c)
    when cl->qdisc->ops->peek(cl->qdisc) returns NULL, we check the return
    value before using it, similar to the existing approach in sch_hfsc.c.
    
    To avoid code duplication, the following changes are made:
    
    1. Changed qdisc_warn_nonwc() (include/net/pkt_sched.h) into a static
    inline function.
    
    2. Moved qdisc_peek_len from net/sched/sch_hfsc.c to
    include/net/pkt_sched.h so that sch_qfq can reuse it.
    
    3. Applied qdisc_peek_len in agg_dequeue to avoid crashing.
    
    Signed-off-by: Xiang Mei <[email protected]>
    Reviewed-by: Cong Wang <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Paolo Abeni <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
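
The shape of the fix can be sketched with simplified stand-in types (the struct and function names below are invented; the real code deals with struct Qdisc and struct sk_buff):

```c
#include <assert.h>
#include <stddef.h>

struct sketch_skb { unsigned int len; };

/* qdisc_peek_len-style helper: peek may legitimately yield NULL when the
 * child qdisc has no packet, so check before touching skb->len. */
static unsigned int sketch_qdisc_peek_len(struct sketch_skb *skb)
{
	if (!skb)	/* without this check, agg_dequeue could deref NULL */
		return 0;
	return skb->len;
}
```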
 
perf: Have get_perf_callchain() return NULL if crosstask and user are set [+ + +]
Author: Josh Poimboeuf <[email protected]>
Date:   Wed Aug 20 14:03:40 2025 -0400

    perf: Have get_perf_callchain() return NULL if crosstask and user are set
    
    [ Upstream commit 153f9e74dec230f2e070e16fa061bc7adfd2c450 ]
    
    get_perf_callchain() doesn't support cross-task unwinding for user space
    stacks; have it return NULL if both the crosstask and user arguments are
    set.
    
    Signed-off-by: Josh Poimboeuf <[email protected]>
    Signed-off-by: Steven Rostedt (Google) <[email protected]>
    Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

perf: Skip user unwind if the task is a kernel thread [+ + +]
Author: Josh Poimboeuf <[email protected]>
Date:   Wed Aug 20 14:03:43 2025 -0400

    perf: Skip user unwind if the task is a kernel thread
    
    [ Upstream commit 16ed389227651330879e17bd83d43bd234006722 ]
    
    If the task is not a user thread, there's no user stack to unwind.
    
    Signed-off-by: Josh Poimboeuf <[email protected]>
    Signed-off-by: Steven Rostedt (Google) <[email protected]>
    Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

perf: Use current->flags & PF_KTHREAD|PF_USER_WORKER instead of current->mm == NULL [+ + +]
Author: Steven Rostedt <[email protected]>
Date:   Wed Aug 20 14:03:41 2025 -0400

    perf: Use current->flags & PF_KTHREAD|PF_USER_WORKER instead of current->mm == NULL
    
    [ Upstream commit 90942f9fac05702065ff82ed0bade0d08168d4ea ]
    
    To determine if a task is a kernel thread or not, it is more reliable to
    use (current->flags & (PF_KTHREAD|PF_USER_WORKER)) than to rely on
    current->mm being NULL.  That is because some kernel tasks (io_uring
    helpers) may have a mm field.
    
    Signed-off-by: Steven Rostedt (Google) <[email protected]>
    Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>
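
The check can be sketched as follows (the SKETCH_ flag values are invented placeholders, not the kernel's PF_* constants):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_PF_KTHREAD     0x1u
#define SKETCH_PF_USER_WORKER 0x2u

/* Flag-based test: robust even for io_uring helper threads, which are
 * kernel tasks yet can carry a non-NULL mm pointer. */
static bool sketch_is_kernel_task(unsigned int flags)
{
	return (flags & (SKETCH_PF_KTHREAD | SKETCH_PF_USER_WORKER)) != 0;
}
```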

 
selftests: mptcp: disable add_addr retrans in endpoint_tests [+ + +]
Author: Geliang Tang <[email protected]>
Date:   Mon Oct 27 10:37:05 2025 -0400

    selftests: mptcp: disable add_addr retrans in endpoint_tests
    
    [ Upstream commit f92199f551e617fae028c5c5905ddd63e3616e18 ]
    
    To prevent test instability in the "delete re-add signal" test caused by
    ADD_ADDR retransmissions, disable retransmissions for this test by setting
    net.mptcp.add_addr_timeout to 0.
    
    Suggested-by: Matthieu Baerts <[email protected]>
    Signed-off-by: Geliang Tang <[email protected]>
    Reviewed-by: Matthieu Baerts (NGI0) <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20250815-net-mptcp-misc-fixes-6-17-rc2-v1-6-521fe9957892@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Stable-dep-of: c3496c052ac3 ("selftests: mptcp: join: mark 'delete re-add signal' as skipped if not supported")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

selftests: mptcp: join: mark 'delete re-add signal' as skipped if not supported [+ + +]
Author: Matthieu Baerts (NGI0) <[email protected]>
Date:   Mon Oct 27 10:37:06 2025 -0400

    selftests: mptcp: join: mark 'delete re-add signal' as skipped if not supported
    
    [ Upstream commit c3496c052ac36ea98ec4f8e95ae6285a425a2457 ]
    
    The call to 'continue_if' was missing: it properly marks a subtest as
    'skipped' if the attached condition is not valid.
    
    Without that, the test is wrongly marked as passed on older kernels.
    
    Fixes: b5e2fb832f48 ("selftests: mptcp: add explicit test case for remove/readd")
    Cc: [email protected]
    Reviewed-by: Geliang Tang <[email protected]>
    Signed-off-by: Matthieu Baerts (NGI0) <[email protected]>
    Link: https://patch.msgid.link/20251020-net-mptcp-c-flag-late-add-addr-v1-4-8207030cb0e8@kernel.org
    Signed-off-by: Jakub Kicinski <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
serial: sc16is7xx: refactor EFR lock [+ + +]
Author: Hugo Villeneuve <[email protected]>
Date:   Mon Oct 27 14:42:45 2025 -0400

    serial: sc16is7xx: refactor EFR lock
    
    [ Upstream commit 0c84bea0cabc4e2b98a3de88eeb4ff798931f056 ]
    
    Move common code for EFR lock/unlock of mutex into functions for code reuse
    and clarity.
    
    With the addition of old_lcr, move irda_mode within struct sc16is7xx_one to
    reduce memory usage:
        Before: /* size: 752, cachelines: 12, members: 10 */
        After:  /* size: 744, cachelines: 12, members: 10 */
    
    Signed-off-by: Hugo Villeneuve <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: 1c05bf6c0262 ("serial: sc16is7xx: remove useless enable of enhanced features")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

serial: sc16is7xx: remove unused to_sc16is7xx_port macro [+ + +]
Author: Hugo Villeneuve <[email protected]>
Date:   Mon Oct 27 14:42:43 2025 -0400

    serial: sc16is7xx: remove unused to_sc16is7xx_port macro
    
    [ Upstream commit 22a048b0749346b6e3291892d06b95278d5ba84a ]
    
    This macro is not used anywhere.
    
    Signed-off-by: Hugo Villeneuve <[email protected]>
    Reviewed-by: Ilpo Järvinen <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: 1c05bf6c0262 ("serial: sc16is7xx: remove useless enable of enhanced features")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

serial: sc16is7xx: remove useless enable of enhanced features [+ + +]
Author: Hugo Villeneuve <[email protected]>
Date:   Mon Oct 27 14:42:46 2025 -0400

    serial: sc16is7xx: remove useless enable of enhanced features
    
    [ Upstream commit 1c05bf6c0262f946571a37678250193e46b1ff0f ]
    
    Commit 43c51bb573aa ("sc16is7xx: make sure device is in suspend once
    probed") permanently enabled access to the enhanced features in
    sc16is7xx_probe(), and it is never disabled after that.
    
    Therefore, remove re-enable of enhanced features in
    sc16is7xx_set_baud(). This eliminates a potential useless read + write
    cycle each time the baud rate is reconfigured.
    
    Fixes: 43c51bb573aa ("sc16is7xx: make sure device is in suspend once probed")
    Cc: stable <[email protected]>
    Signed-off-by: Hugo Villeneuve <[email protected]>
    Link: https://patch.msgid.link/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

serial: sc16is7xx: reorder code to remove prototype declarations [+ + +]
Author: Hugo Villeneuve <[email protected]>
Date:   Mon Oct 27 14:42:44 2025 -0400

    serial: sc16is7xx: reorder code to remove prototype declarations
    
    [ Upstream commit 2de8a1b46756b5a79d8447f99afdfe49e914225a ]
    
    Move/reorder some functions to remove sc16is7xx_ier_set() and
    sc16is7xx_stop_tx() prototype declarations.
    
    No functional change.
    
    sc16is7xx_ier_set() was introduced in
    commit cc4c1d05eb10 ("sc16is7xx: Properly resume TX after stop").
    
    Reviewed-by: Andy Shevchenko <[email protected]>
    Signed-off-by: Hugo Villeneuve <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: 1c05bf6c0262 ("serial: sc16is7xx: remove useless enable of enhanced features")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

 
x86/bugs: Fix reporting of LFENCE retpoline [+ + +]
Author: David Kaplan <[email protected]>
Date:   Mon Sep 15 08:47:05 2025 -0500

    x86/bugs: Fix reporting of LFENCE retpoline
    
    [ Upstream commit d1cc1baef67ac6c09b74629ca053bf3fb812f7dc ]
    
    The LFENCE retpoline mitigation is not secure but the kernel prints
    inconsistent messages about this fact.  The dmesg log says 'Mitigation:
    LFENCE', implying the system is mitigated.  But sysfs reports 'Vulnerable:
    LFENCE' implying the system (correctly) is not mitigated.
    
    Fix this by printing a consistent 'Vulnerable: LFENCE' string everywhere
    when this mitigation is selected.
    
    Signed-off-by: David Kaplan <[email protected]>
    Signed-off-by: Borislav Petkov (AMD) <[email protected]>
    Link: https://lore.kernel.org/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

x86/bugs: Report correct retbleed mitigation status [+ + +]
Author: David Kaplan <[email protected]>
Date:   Mon Sep 15 08:47:06 2025 -0500

    x86/bugs: Report correct retbleed mitigation status
    
    [ Upstream commit 930f2361fe542a00de9ce6070b1b6edb976f1165 ]
    
    On Intel CPUs, the default retbleed mitigation is IBRS/eIBRS but this
    requires that a similar spectre_v2 mitigation is applied.  If the user
    selects a different spectre_v2 mitigation (like spectre_v2=retpoline) a
    warning is printed but sysfs will still report 'Mitigation: IBRS' or
    'Mitigation: Enhanced IBRS'.  This is incorrect because retbleed is not
    mitigated, and IBRS is not actually set.
    
    Fix this by choosing RETBLEED_MITIGATION_NONE in this scenario so the
    kernel correctly reports the system as vulnerable to retbleed.
    
    Signed-off-by: David Kaplan <[email protected]>
    Signed-off-by: Borislav Petkov (AMD) <[email protected]>
    Link: https://lore.kernel.org/[email protected]
    Signed-off-by: Sasha Levin <[email protected]>

 
xhci: dbc: Allow users to modify DbC poll interval via sysfs [+ + +]
Author: Uday M Bhat <[email protected]>
Date:   Mon Oct 27 12:29:16 2025 -0400

    xhci: dbc: Allow users to modify DbC poll interval via sysfs
    
    [ Upstream commit de3edd47a18fe05a560847cc3165871474e08196 ]
    
    The xHCI DbC driver polls the host controller for DbC events at a
    reduced rate when DbC is enabled but there are no active data transfers.
    
    Allow users to modify this reduced poll interval via dbc_poll_interval_ms
    sysfs entry. Unit is milliseconds and accepted range is 0 to 5000.
    Max interval of 5000 ms is selected as it matches the common 5 second
    timeout used in usb stack.
    Default value is 64 milliseconds.
    
    A long interval is useful when users know there won't be any activity
    on systems connected via DbC for long periods, and want to avoid
    battery drainage due to unnecessary CPU usage.
    
    Example being Android Debugger (ADB) usage over DbC on ChromeOS systems
    running Android Runtime.
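
    The range check a dbc_poll_interval_ms store handler would perform
    (0 to 5000 ms, with 5000 matching the usual 5 second USB timeout)
    can be sketched as below. This is a userspace approximation with an
    invented function name, not the driver's actual sysfs code.

```c
#include <assert.h>
#include <errno.h>

#define DBC_POLL_INTERVAL_DEFAULT_MS 64
#define DBC_POLL_INTERVAL_MAX_MS     5000

/* Validate a requested interval and apply it; reject out-of-range
 * writes with -EINVAL, leaving the current value untouched. */
static int demo_set_poll_interval(unsigned long requested_ms,
				  unsigned int *poll_interval_ms)
{
	if (requested_ms > DBC_POLL_INTERVAL_MAX_MS)
		return -EINVAL;
	*poll_interval_ms = (unsigned int)requested_ms;
	return 0;
}
```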
    
    [minor changes and rewording -Mathias]
    
    Co-developed-by: Samuel Jacob <[email protected]>
    Signed-off-by: Samuel Jacob <[email protected]>
    Signed-off-by: Uday M Bhat <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: f3d12ec847b9 ("xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

xhci: dbc: Avoid event polling busyloop if pending rx transfers are inactive. [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Mon Oct 27 12:29:18 2025 -0400

    xhci: dbc: Avoid event polling busyloop if pending rx transfers are inactive.
    
    [ Upstream commit cab63934c33b12c0d1e9f4da7450928057f2c142 ]
    
    Event polling delay is set to 0 if there are any pending requests in
    either rx or tx requests lists. Checking for pending requests does
    not work well for "IN" transfers as the tty driver always queues
    requests to the list and TRBs to the ring, preparing to receive data
    from the host.
    
    This causes unnecessary busylooping and cpu hogging.
    
    Only set the event polling delay to 0 if there are pending tx "write"
    transfers, or if it was less than 10ms since last active data transfer
    in any direction.
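
    The delay choice described above amounts to: poll immediately only
    when tx "write" transfers are pending, or within 10 ms of the last
    completed transfer in either direction; otherwise back off to the
    slow interval. A minimal userspace sketch, with invented names and
    the 64 ms slow interval assumed from the earlier patch in this
    series:

```c
#include <assert.h>
#include <stdbool.h>

#define DBC_POLL_FAST_MS     0
#define DBC_POLL_SLOW_MS     64
#define DBC_ACTIVE_WINDOW_MS 10

/* Pending rx TRBs are always queued by the tty side, so they must not
 * count as activity; only tx work or recent completions poll fast. */
static unsigned int demo_dbc_poll_delay(bool tx_pending,
					unsigned long now_ms,
					unsigned long last_transfer_ms)
{
	if (tx_pending)
		return DBC_POLL_FAST_MS;
	if (now_ms - last_transfer_ms < DBC_ACTIVE_WINDOW_MS)
		return DBC_POLL_FAST_MS;	/* recent activity */
	return DBC_POLL_SLOW_MS;		/* idle, save CPU */
}
```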
    
    Cc: Łukasz Bartosik <[email protected]>
    Fixes: fb18e5bb9660 ("xhci: dbc: poll at different rate depending on data transfer activity")
    Cc: [email protected]
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: f3d12ec847b9 ("xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Mon Oct 27 12:29:19 2025 -0400

    xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event
    
    [ Upstream commit f3d12ec847b945d5d65846c85f062d07d5e73164 ]
    
    DbC may add 1024 bogus bytes to the beginning of the receiving endpoint
    if DbC hw triggers a STALL event before any Transfer Request Blocks
    (TRBs) for incoming data are queued, but the driver handles the event
    after it has queued the TRBs.
    
    This is possible as xHCI DbC hardware may trigger spurious STALL
    transfer events even if the endpoint is empty. The STALL event contains
    a pointer to the stalled TRB and the "remaining" untransferred data
    length.
    
    As there are no TRBs queued yet, the STALL event will just point to the
    first TRB position of the empty ring, with '0' bytes remaining
    untransferred.
    
    The DbC driver polls for events and may not handle the STALL event
    before /dev/ttyDBC0 is opened and incoming data TRBs are queued.
    
    The DbC event handler will now assume the first queued TRB (length
    1024) has stalled with '0' bytes remaining untransferred, and copies
    the data as if the full 1024 bytes had been received.
    
    This race situation can be practically mitigated by making sure the
    event handler handles all pending transfer events when DbC reaches the
    configured state, and only then creates /dev/ttyDBC0 and starts
    queueing transfers. This way the event handler can detect STALL events
    on empty rings and discard them before any transfers are queued.
    
    This does in practice solve the issue, but still leaves a small
    possible gap for the race to trigger. We still need a way to
    distinguish spurious STALLs on empty rings with '0' bytes remaining
    from actual STALL events with all bytes transmitted.
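
    The detection step in the mitigation above reduces to a simple
    predicate: a STALL event seen while the ring is still empty, with
    '0' bytes remaining, can only be spurious and is discarded during
    the drain before the tty is created. A minimal sketch with an
    invented helper name:

```c
#include <assert.h>
#include <stdbool.h>

/* During the drain of pending events at DbC configuration time, a
 * STALL on a ring with nothing queued and 0 bytes remaining cannot
 * correspond to a real transfer and is dropped. */
static bool demo_stall_is_spurious(unsigned int queued_trbs,
				   unsigned int remaining_bytes)
{
	return queued_trbs == 0 && remaining_bytes == 0;
}
```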
    
    Cc: stable <[email protected]>
    Fixes: dfba2174dc42 ("usb: xhci: Add DbC support in xHCI driver")
    Tested-by: Łukasz Bartosik <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

xhci: dbc: Improve performance by removing delay in transfer event polling. [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Mon Oct 27 12:29:17 2025 -0400

    xhci: dbc: Improve performance by removing delay in transfer event polling.
    
    [ Upstream commit 03e3d9c2bd85cda941b3cf78e895c1498ac05c5f ]
    
    Queue event polling work with 0 delay in case there are pending
    transfers queued up. This is part 2 of a 3 part series that roughly
    triples DbC performance when using adb push and pull over DbC.
    
    Max/min push rate after the patches is 210/118 MB/s, pull rate is
    171/133 MB/s, tested with large files (300MB-9GB) by Łukasz Bartosik.
    
    First performance improvement patch was commit 31128e7492dc
    ("xhci: dbc: add dbgtty request to end of list once it completes")
    
    Cc: Łukasz Bartosik <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: f3d12ec847b9 ("xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>

xhci: dbc: poll at different rate depending on data transfer activity [+ + +]
Author: Mathias Nyman <[email protected]>
Date:   Mon Oct 27 12:29:15 2025 -0400

    xhci: dbc: poll at different rate depending on data transfer activity
    
    [ Upstream commit fb18e5bb96603cc79d97f03e4c05f3992cf28624 ]
    
    The DbC driver starts polling for events immediately when DbC is
    enabled. The current polling interval is 1ms, which keeps the CPU busy,
    impacting power management even when there are no active data
    transfers.
    
    Solve this by polling at a slower rate, with a 64ms interval as the
    default, until a transfer request is queued or there are still pending
    unhandled transfers at event completion.
    
    Tested-by: Uday M Bhat <[email protected]>
    Signed-off-by: Mathias Nyman <[email protected]>
    Link: https://lore.kernel.org/r/[email protected]
    Signed-off-by: Greg Kroah-Hartman <[email protected]>
    Stable-dep-of: f3d12ec847b9 ("xhci: dbc: fix bogus 1024 byte prefix if ttyDBC read races with stall event")
    Signed-off-by: Sasha Levin <[email protected]>
    Signed-off-by: Greg Kroah-Hartman <[email protected]>