Threat and Vulnerability Intelligence Database

Example Searches:

CVE-2023-52498

Description: In the Linux kernel, the following vulnerability has been resolved: "PM: sleep: Fix possible deadlocks in core system-wide PM code". It is reported that in low-memory situations the system-wide resume core code deadlocks, because async_schedule_dev() executes its argument function synchronously if it cannot allocate memory (and not only in that case), and that function attempts to acquire a mutex that is already held. Executing the argument function synchronously from within dpm_async_fn() may also be problematic for ordering reasons (it may cause a consumer device's resume callback to be invoked before a requisite supplier device's one, for example). Address this by changing the code in question to use async_schedule_dev_nocall() for scheduling the asynchronous execution of device suspend and resume functions, and to run them synchronously directly if async_schedule_dev_nocall() returns false.
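The fix described above boils down to a common pattern: try to schedule work asynchronously, and fall back to running it synchronously in the caller when scheduling fails, instead of letting the scheduler silently run it inline under the caller's locks. Below is a minimal user-space sketch of that pattern using POSIX threads as a stand-in for the kernel's async machinery; the names (resume_one_device, schedule_or_run) are illustrative, not the kernel's.

```c
#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-in for a device resume callback. */
static void *resume_one_device(void *arg)
{
    const char *name = arg;
    printf("resuming %s\n", name);
    return NULL;
}

/*
 * Try to run the callback on its own thread; if thread creation fails
 * (e.g. under memory pressure), run it synchronously here, where the
 * caller knows which locks are held -- mirroring the switch to
 * async_schedule_dev_nocall() plus an explicit synchronous fallback.
 */
static int schedule_or_run(pthread_t *tid, void *(*fn)(void *), void *arg)
{
    if (pthread_create(tid, NULL, fn, arg) == 0)
        return 1;               /* scheduled asynchronously */
    fn(arg);                    /* explicit, visible synchronous fallback */
    return 0;
}

int main(void)
{
    pthread_t tid;

    if (schedule_or_run(&tid, resume_one_device, "dev0"))
        pthread_join(tid, NULL);
    return 0;
}
```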

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52497

Description: In the Linux kernel, the following vulnerability has been resolved: "erofs: fix lz4 inplace decompression". Currently EROFS can map another compressed buffer for in-place decompression; that was used to handle cases where some pages of compressed data are actually not in-place I/O. However, like most simple LZ77 algorithms, LZ4 expects the compressed data to be arranged at the end of the decompressed buffer and explicitly uses memmove() to handle overlapping (the original commit message includes a diagram showing the compressed data at the tail of the buffer, with decompression proceeding forward from the start of that same buffer). Although EROFS arranges compressed data like this, it typically maps two individual virtual buffers, so the relative order is uncertain. Previously this was hardly observed, since LZ4 only uses memmove() for short overlapped literals, and the x86/arm64 memmove implementations seem to completely cover it up and don't expose this issue. Juhyung reported that EROFS data corruption can be found on a new Intel x86 processor. After some analysis, it seems that recent x86 processors with the new FSRM feature expose this issue with "rep movsb". Let's strictly use the decompressed buffer for lz4 in-place decompression for now. Later, as a useful improvement, we could try to tie up these two buffers together in the correct order.
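The layout constraint the commit relies on, compressed input placed at the tail of the single decompression buffer so the output cursor never overruns unread input, can be illustrated with a toy run-length decoder. This is a hedged analogy under that assumption, not EROFS or LZ4 code; the function and data are made up for the example.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy in-place run-length decoder: compressed (count, byte) pairs sit at
 * the TAIL of the same buffer that receives the decompressed output,
 * mirroring the layout LZ4-style in-place decompression expects.  The
 * assert is the safety invariant: output must never overwrite input that
 * has not been read yet.
 */
static size_t rle_decode_inplace(unsigned char *buf, size_t buf_len, size_t comp_len)
{
    size_t in = buf_len - comp_len;   /* read cursor: compressed data at the tail */
    size_t out = 0;                   /* write cursor: output grows from the head */

    while (in < buf_len) {
        unsigned char count = buf[in++];
        unsigned char value = buf[in++];

        assert(out + count <= in);    /* never run over unread input */
        memset(buf + out, value, count);
        out += count;
    }
    return out;                       /* decompressed length */
}

int main(void)
{
    unsigned char buf[16];
    const unsigned char comp[] = { 3, 'a', 1, 'b', 4, 'c' };   /* "aaabcccc" */

    /* Place the compressed data at the end of the destination buffer. */
    memcpy(buf + sizeof(buf) - sizeof(comp), comp, sizeof(comp));

    size_t n = rle_decode_inplace(buf, sizeof(buf), sizeof(comp));
    printf("%.*s\n", (int)n, (char *)buf);    /* prints: aaabcccc */
    return 0;
}
```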

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52495

Description: In the Linux kernel, the following vulnerability has been resolved: "soc: qcom: pmic_glink_altmode: fix port sanity check". The PMIC GLINK altmode driver currently supports at most two ports. Fix the incomplete port sanity check on notifications to avoid accessing and corrupting memory beyond the port array if we ever get a notification for an unsupported port.
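The fix is a textbook bounds check before indexing a fixed-size array with an externally supplied index. A minimal user-space sketch follows; the struct and function names are illustrative, not the driver's.

```c
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct altmode_port { int orientation; };

static struct altmode_port ports[2];      /* driver supports at most two ports */

/* Reject notifications for ports outside the array before touching memory. */
static int handle_notification(unsigned int port, int orientation)
{
    if (port >= ARRAY_SIZE(ports)) {
        fprintf(stderr, "ignoring notification for unsupported port %u\n", port);
        return -1;
    }
    ports[port].orientation = orientation;
    return 0;
}

int main(void)
{
    handle_notification(1, 42);   /* ok */
    handle_notification(7, 42);   /* rejected instead of corrupting memory */
    return 0;
}
```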

EPSS Score: 0.05%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52494

Description: In the Linux kernel, the following vulnerability has been resolved: "bus: mhi: host: Add alignment check for event ring read pointer". Although the event ring read pointer is checked with "is_valid_ring_ptr" to make sure it is in the buffer range, there is another risk: the pointer may not be aligned. Since event ring elements are expected to be 128-bit aligned (struct mhi_ring_element), an unaligned read pointer could lead to multiple issues such as DoS or ring buffer memory corruption. So add an alignment check for the event ring read pointer.
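Besides the range check, the fix adds an alignment check: a device-provided ring read pointer is only trusted if its offset from the ring base is a multiple of the element size (128 bits here). A user-space sketch of both checks; the function name and values are illustrative, not the MHI host stack's.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ELEMENT_SIZE 16u   /* 128-bit ring elements, as with struct mhi_ring_element */

/*
 * Trust a read pointer only if it falls inside the ring buffer AND is
 * aligned to the element size; an unaligned pointer would make element
 * indexing straddle element boundaries.
 */
static bool ring_ptr_is_valid(uintptr_t base, size_t ring_len, uintptr_t rp)
{
    if (rp < base || rp >= base + ring_len)
        return false;                       /* out of range */
    if ((rp - base) % ELEMENT_SIZE != 0)
        return false;                       /* misaligned */
    return true;
}

int main(void)
{
    uintptr_t base = 0x1000;

    printf("%d\n", ring_ptr_is_valid(base, 256, base + 32));  /* 1: valid */
    printf("%d\n", ring_ptr_is_valid(base, 256, base + 33));  /* 0: misaligned */
    printf("%d\n", ring_ptr_is_valid(base, 256, base + 512)); /* 0: out of range */
    return 0;
}
```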

EPSS Score: 0.05%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52493

Description: In the Linux kernel, the following vulnerability has been resolved: "bus: mhi: host: Drop chan lock before queuing buffers". Ensure that the read and write locks for the channel are not taken in succession, by dropping the read lock in parse_xfer_event() so that a callback given to the client can queue buffers and acquire the write lock in that process. Any queueing of buffers should be done without the channel read lock held, as holding it can result in nested locking and a soft lockup. [mani: added fixes tag and cc'ed stable]
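The underlying rule is: do not hold a read lock while invoking a callback that may itself take the corresponding write lock. A minimal pthread rwlock sketch of the lock-drop pattern; the callback, lock, and function names are illustrative stand-ins, not the MHI host code.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t chan_lock = PTHREAD_RWLOCK_INITIALIZER;
static int chan_state = 1;

/* Client callback: queuing buffers needs the WRITE lock. */
static void client_callback(int event)
{
    pthread_rwlock_wrlock(&chan_lock);
    printf("queueing buffer for event %d\n", event);
    pthread_rwlock_unlock(&chan_lock);
}

static void parse_xfer_event(void)
{
    int event;

    pthread_rwlock_rdlock(&chan_lock);
    event = chan_state;                 /* snapshot what we need under the lock */
    pthread_rwlock_unlock(&chan_lock);  /* drop the read lock BEFORE the callback */

    client_callback(event);             /* safe: the callback can take the write lock */
}

int main(void)
{
    parse_xfer_event();
    return 0;
}
```

Calling client_callback() while still holding the read lock would attempt a write lock on top of a held read lock, which is exactly the nested-locking hazard the commit describes.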

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52492

Description: In the Linux kernel, the following vulnerability has been resolved: "dmaengine: fix NULL pointer in channel unregistration function". __dma_async_device_channel_register() can fail. In case of failure, chan->local is freed (with free_percpu()) and set to NULL. When dma_async_device_unregister() is called (through the managed API or intentionally by the DMA controller driver), channels are unconditionally unregistered, leading to a NULL pointer dereference (the commit message includes the oops "Unable to handle kernel NULL pointer dereference at virtual address 00000000000000d0" with a call trace through device_del, device_unregister, and __dma_async_device_channel_unregister). Looking at the dma_async_device_register() error path, channel device unregistration is done only if chan->local is not NULL. Add the same condition at the beginning of __dma_async_device_channel_unregister(), to avoid the NULL pointer issue whichever API path is used to reach this function.
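The fix mirrors the guard already present on the registration error path: only unregister the channel's device if chan->local was successfully allocated. A minimal user-space sketch of that symmetric NULL check; the struct and helpers are illustrative, not the dmaengine core.

```c
#include <stdio.h>
#include <stdlib.h>

struct chan {
    int *local;          /* stand-in for per-CPU state; NULL if setup failed */
    int  registered;
};

static int channel_register(struct chan *c)
{
    c->local = malloc(sizeof(*c->local));
    if (!c->local)
        return -1;       /* registration failed: leave local == NULL */
    c->registered = 1;
    return 0;
}

static void channel_unregister(struct chan *c)
{
    /* Same condition as the register error path: nothing to undo if
     * local is NULL, whichever API path led here. */
    if (!c->local)
        return;
    c->registered = 0;
    free(c->local);
    c->local = NULL;
}

int main(void)
{
    struct chan c = { 0 };

    channel_unregister(&c);            /* no-op instead of a NULL dereference */
    if (channel_register(&c) == 0)
        channel_unregister(&c);
    return 0;
}
```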

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52491

Description: In the Linux kernel, the following vulnerability has been resolved: "media: mtk-jpeg: Fix use after free bug due to error path handling in mtk_jpeg_dec_device_run". In mtk_jpeg_probe, &jpeg->job_timeout_work is bound to mtk_jpeg_job_timeout_work. In mtk_jpeg_dec_device_run, if an error happens in mtk_jpeg_set_dec_dst, the code still starts the timeout worker while marking the job as finished by invoking v4l2_m2m_job_finish. There are two ways to trigger the bug. If we remove the module, mtk_jpeg_remove is called for cleanup; the possible sequence (shown as a two-CPU interleaving in the commit message) is that one CPU runs mtk_jpeg_dec_device_run, starts the worker, and then mtk_jpeg_remove -> v4l2_m2m_release -> kfree(m2m_dev), while the other CPU's mtk_jpeg_job_timeout_work later calls v4l2_m2m_get_curr_priv and uses the freed m2m_dev->curr_ctx, causing a use-after-free. If we close the file descriptor instead, mtk_jpeg_release is called and a similar sequence occurs. Fix this bug by starting the timeout worker only if the jpegdec worker was started successfully. Then v4l2_m2m_job_finish will only be called in either mtk_jpeg_job_timeout_work or mtk_jpeg_dec_device_run.
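The fix enforces a single-owner rule for job completion: arm the timeout worker only after the decode work has actually been started, so exactly one path ever finishes the job. A simplified sketch of that ordering; every name here is an illustrative stand-in, not the mtk-jpeg driver's API.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real driver hooks. */
static bool start_decode_hw(void)       { return true; }   /* pretend setup succeeded */
static void arm_timeout_watchdog(void)  { puts("watchdog armed"); }
static void job_finish(const char *who) { printf("job finished by %s\n", who); }

static void device_run(void)
{
    if (!start_decode_hw()) {
        /* Setup failed: finish the job here and do NOT arm the watchdog,
         * so the timeout path can never touch already-released job state. */
        job_finish("device_run error path");
        return;
    }
    /* Hardware is running: the watchdog (or the completion path) finishes it. */
    arm_timeout_watchdog();
}

int main(void)
{
    device_run();
    return 0;
}
```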

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52490

Description: In the Linux kernel, the following vulnerability has been resolved: "mm: migrate: fix getting incorrect page mapping during page migration". When running stress-ng testing, we found the kernel crashing after a few hours with "Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000" (the commit message includes a call trace through dentry_name, pointer, vsnprintf, vscnprintf, vprintk_store, vprintk_emit, printk, __dump_page, dump_page, set_migratetype_isolate, start_isolate_page_range, offline_pages, memory_block_offline, memory_subsys_offline, and device_offline). After analyzing the vmcore, I found this issue is caused by page migration. The scenario is that one thread is doing page migration, and the target page's ->mapping field is used to save the 'anon_vma' pointer between page unmap and page move; at that point the target page is locked and its refcount is 1. Meanwhile, another stress-ng thread performing memory hotplug attempts to offline the target page that is being migrated. It discovers that the refcount of this target page is 1, which prevents the offline operation, so it proceeds to dump the page. However, page_mapping() of the target page may return an incorr...

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52489

Description: In the Linux kernel, the following vulnerability has been resolved: "mm/sparsemem: fix race in accessing memory_section->usage". The race below is observed on a PFN that falls into a device memory region, with a system memory configuration where the PFNs are laid out as [ZONE_NORMAL ZONE_DEVICE ZONE_NORMAL]. Since the normal zone's start and end PFNs contain the device memory PFNs as well, compaction will also be attempted on the device memory PFNs, even though it ends up as a NOP (because pfn_to_online_page() returns NULL for ZONE_DEVICE memory sections). When, from another core, the section mappings are being removed for the ZONE_DEVICE region that the PFN in question belongs to while compaction is operating on that PFN, the result is a kernel crash with CONFIG_SPARSEMEM_VMEMMAP enabled. The crash logs can be seen at [1]. The interleaving shown in the commit message is: compact_zone() -> __pageblock_pfn_to_page() -> (a) pfn_valid(): valid_section() returns true; meanwhile (b) memunmap_pages() -> __remove_pages() -> sparse_remove_section() -> section_deactivate() frees the array ms->usage and sets ms->usage = NULL; then pfn_section_valid() accesses ms->usage, which is now NULL. NOTE: From the above it can be said that the race is reduced to one between pfn_valid()/pfn_section_valid() and section deactivation with SPARSEMEM_VMEMMAP enabled. The commit b943f045a9af ("mm/sparse: fix kernel crash with pfn_section_valid check") tried to address the same pro...
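One generic defensive pattern for this class of check-then-dereference race is to snapshot the pointer once and validate the snapshot, rather than re-reading it after an earlier validity check. The sketch below shows that pattern with C11 atomics on deliberately simplified stand-in structs; it is not the upstream fix (which, as the description is truncated here, is not fully described above), and by itself it only avoids the NULL dereference; keeping the pointed-to memory valid for concurrent readers additionally requires deferred reclamation, which is outside this sketch.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's mem_section structures. */
struct mem_section_usage { unsigned long subsection_map; };

struct mem_section {
    _Atomic(struct mem_section_usage *) usage;   /* may be freed and NULLed concurrently */
};

/*
 * Snapshot the pointer once and NULL-check the snapshot, instead of
 * re-reading ms->usage after an earlier validity check.  Note: a real
 * fix also has to keep *usage alive for readers (e.g. via RCU).
 */
static bool section_has_subsection(struct mem_section *ms)
{
    struct mem_section_usage *usage = atomic_load(&ms->usage);

    if (!usage)
        return false;
    return usage->subsection_map != 0;
}

int main(void)
{
    struct mem_section_usage u = { .subsection_map = 1 };
    struct mem_section ms;

    atomic_init(&ms.usage, &u);
    printf("%d\n", section_has_subsection(&ms));   /* 1 */

    atomic_store(&ms.usage, NULL);                 /* simulate section_deactivate() */
    printf("%d\n", section_has_subsection(&ms));   /* 0, no NULL dereference */
    return 0;
}
```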

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)

CVE-2023-52488

Description: In the Linux kernel, the following vulnerability has been resolved: "serial: sc16is7xx: convert from _raw_ to _noinc_ regmap functions for FIFO". The SC16IS7XX IC supports a burst mode to access the FIFOs where the initial register address is sent ($00), followed by all the FIFO data, without having to resend the register address each time. In this mode, the IC doesn't increment the register address for each R/W byte. regmap_raw_read() and regmap_raw_write() are functions which can perform I/O over multiple registers. They are currently used to read/write from/to the FIFO, and although they operate correctly in this burst mode on the SPI bus, they would corrupt the regmap cache if it were not disabled manually. The reason is that when the R/W size is more than 1 byte, these functions assume that the register address is incremented and handle the cache accordingly. Convert the FIFO R/W functions to use the regmap _noinc_ versions in order to remove the manual cache control, which was a workaround when using the _raw_ versions. FIFO registers are properly declared as volatile, so the cache will not be used/updated for FIFO accesses.
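The difference is in how regmap models the transfer: the _raw_ helpers treat a multi-byte transfer as spanning consecutive register addresses (and cache it that way), while the _noinc_ helpers describe repeated access to one register, which matches a FIFO. A hedged kernel-context sketch of the _noinc_ usage follows; it is not a standalone program, the register address 0x00 and helper names are illustrative rather than the sc16is7xx driver's, and the regmap_config would additionally need readable_noinc_reg/writeable_noinc_reg and volatile handling for the FIFO register.

```c
/* Kernel-context sketch: accessing a hardware FIFO through regmap
 * without address auto-increment. */
#include <linux/regmap.h>

#define FIFO_REG 0x00   /* illustrative FIFO register address */

/*
 * regmap_raw_read()/regmap_raw_write() assume the address increments per
 * byte and would update the cache for FIFO_REG..FIFO_REG+len-1.  The
 * _noinc_ variants tell the core the same register is accessed repeatedly,
 * so the cache is not poisoned and does not need to be disabled manually.
 */
static int fifo_read(struct regmap *map, u8 *buf, size_t len)
{
    return regmap_noinc_read(map, FIFO_REG, buf, len);
}

static int fifo_write(struct regmap *map, const u8 *buf, size_t len)
{
    return regmap_noinc_write(map, FIFO_REG, buf, len);
}
```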

EPSS Score: 0.04%

Source: CVE
December 20th, 2024 (6 months ago)