Obscure startup wins prestigious CES 2024 award — you’ve probably never heard of it, but Panmnesia is the company that could make ChatGPT 6 (or 7) times faster


The highly coveted Innovation Award at the forthcoming Consumer Electronics Show (CES) 2024 event in January has been snapped up by a Korean startup for its AI accelerator.
Panmnesia has built its AI accelerator device on Compute Express Link (CXL) 3.0 technology, which allows an external memory pool to be shared among host computers and components such as the CPU, translating to near-limitless memory capacity. This is made possible by a CXL 3.0 controller incorporated into the accelerator chip.
CXL is used to connect system devices, including accelerators, memory expanders, processors, and switches. By linking multiple accelerators and memory expanders through CXL switches, the technology can supply enough memory for memory-intensive AI applications.
What CXL 3.0 means for LLMs
With CXL 2.0, devices like this would give each host access only to its dedicated portion of the pooled external memory, while the latest generation lets hosts access the entire pool as and when needed.
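A minimal sketch of that difference, as a toy allocation model rather than anything from Panmnesia or the CXL specification (all class and variable names here are hypothetical):

```python
# Illustrative sketch only: a toy model of the pooling difference between
# CXL 2.0-style and CXL 3.0-style memory pools. Not a real CXL API.

class Cxl2MemoryPool:
    """CXL 2.0-style pooling: each host is bound to a fixed partition."""
    def __init__(self, total_gb: int, hosts: list[str]):
        share = total_gb // len(hosts)
        self.partitions = {h: share for h in hosts}

    def allocate(self, host: str, gb: int) -> bool:
        # A host can only draw from its own dedicated slice.
        if self.partitions[host] >= gb:
            self.partitions[host] -= gb
            return True
        return False  # fails even if other partitions sit idle


class Cxl3MemoryPool:
    """CXL 3.0-style pooling: every host can draw from the whole pool."""
    def __init__(self, total_gb: int):
        self.free_gb = total_gb

    def allocate(self, host: str, gb: int) -> bool:
        if self.free_gb >= gb:
            self.free_gb -= gb
            return True
        return False


# With 512 GB pooled across four hosts, a single host asking for 200 GB
# fails under the 2.0 model (its slice is only 128 GB) but succeeds under 3.0.
pool2 = Cxl2MemoryPool(512, ["h0", "h1", "h2", "h3"])
pool3 = Cxl3MemoryPool(512)
print(pool2.allocate("h0", 200))  # False
print(pool3.allocate("h0", 200))  # True
```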
“We believe that our CXL technology will be a cornerstone for next-generation AI acceleration system,” said Panmnesia founder and CEO Myoungsoo Jung in a statement.
“We remain committed to our endeavor revolutionizing not only for AI acceleration system, but also other general-purpose environments such as data centers, cloud computing, and high-performance computing.”
Panmnesia’s technology works much like clusters of servers sharing external SSDs to store data, and it would be particularly useful for servers because they often need to access more data than they can hold in their built-in memory.
This device is built specifically for large-scale AI applications, and its creators claim it is 101 times faster at AI-based search functions than conventional setups that store data on network-attached SSDs. The architecture is also said to minimize energy costs and operational expenditure.
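To see why moving data from network-attached SSDs into pooled memory can yield that kind of gain, some back-of-the-envelope arithmetic helps. The latency figures below are our own rough ballpark assumptions, not measurements from Panmnesia, and the point is the order of magnitude rather than the exact 101x figure:

```python
# Back-of-the-envelope arithmetic only; latency figures are rough ballpark
# assumptions, not measurements from Panmnesia.

NETWORK_SSD_US = 100.0  # ~100 microseconds: NVMe SSD reached over a network
CXL_MEMORY_US = 0.5     # ~0.5 microseconds: CXL-attached memory access

lookups_per_query = 10_000  # hypothetical random accesses per AI search query

ssd_time = lookups_per_query * NETWORK_SSD_US / 1e6  # seconds
cxl_time = lookups_per_query * CXL_MEMORY_US / 1e6

print(f"network SSD: {ssd_time:.3f} s per query")  # 1.000 s
print(f"CXL memory:  {cxl_time:.4f} s per query")  # 0.0050 s
print(f"speedup:     {ssd_time / cxl_time:.0f}x")  # 200x
```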
If used in the server configurations that the likes of OpenAI rely on to host large language models (LLMs) such as ChatGPT, alongside hardware from other suppliers, it might drastically improve the performance of those models.