Liquid AI Announces New AI Models Built on Entirely New Architecture

Liquid AI, a Massachusetts-based artificial intelligence (AI) startup, has announced its first generative AI models, which are not built on the existing transformer architecture. Dubbed Liquid Foundation Models (LFMs), the new architecture moves away from the Generative Pre-trained Transformer (GPT) design that underpins popular AI models such as OpenAI's GPT series, Gemini, Copilot, and more. The startup claims the new AI models were built from first principles and that they outperform large language models (LLMs) of comparable size.

Liquid AI’s New Liquid Foundation Models

The startup was co-founded in 2023 by researchers from the Massachusetts Institute of Technology (MIT)'s Computer Science and Artificial Intelligence Laboratory (CSAIL), with the aim of building a new architecture for AI models that can match or surpass GPT-based models.

These new LFMs are available in three parameter sizes: 1.3B, 3.1B, and 40.3B. The largest is a Mixture of Experts (MoE) model, meaning it is composed of several smaller specialised networks, or "experts", and is aimed at tackling more complex tasks. The LFMs are now available on the company's Liquid Playground, Lambda (Chat UI and API), and Perplexity Labs, and will soon be added to Cerebras Inference. Further, the AI models are being optimised for Nvidia, AMD, Qualcomm, Cerebras, and Apple hardware, the company stated.
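To make the MoE idea concrete, here is a minimal, illustrative sketch of token-level top-k expert routing in Python. Liquid AI has not published the internals of its 40.3B model, so the expert count, hidden size, and gating scheme below are assumptions for illustration only.

```python
import numpy as np

# Toy Mixture of Experts routing: each token is sent to its top-k
# highest-scoring experts, and their outputs are mixed. All sizes here
# are hypothetical, not Liquid AI's actual configuration.
rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical number of expert sub-networks
D_MODEL = 16      # hypothetical hidden size
TOP_K = 2         # route each token to its 2 best experts

# Each "expert" is a tiny feed-forward weight matrix in this sketch.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1  # router weights


def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector to its top-k experts and mix outputs."""
    logits = x @ gate_w                # router score for every expert
    top = np.argsort(logits)[-TOP_K:]  # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts
    # Only the chosen experts run, which is why MoE models activate just
    # a fraction of their total parameters for each token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (16,)
```

The point of the design, in general MoE systems, is that total parameter count can grow far beyond what any single token actually exercises, keeping per-token compute manageable.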

LFMs also differ significantly from GPT technology. The company highlighted that these models were built from first principles. First principles is essentially a problem-solving approach where a complex technology is broken down to its fundamentals and then built up from there.

According to the startup, these new AI models are built on what it calls computational units. Put simply, this is a redesign of the token system; the company instead uses the term Liquid system. These units contain condensed information with a focus on maximising knowledge capacity and reasoning. The startup claims this new design helps reduce memory costs during inference and increases performance across video, audio, text, time series, and signals.
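Liquid AI has not detailed the mathematics of its computational units, but the founders' earlier published research centred on liquid time-constant (LTC) networks, where a recurrent state evolves with an input-dependent time constant. The sketch below illustrates that general idea only; the dimensions, weights, and integration step are hypothetical, and this should not be read as the LFM architecture itself.

```python
import numpy as np

# Toy liquid time-constant (LTC) style cell: the state x relaxes toward
# a target A at a rate that depends on the current input, so the
# effective time constant is "liquid". Purely illustrative.
rng = np.random.default_rng(0)
D_IN, D_STATE = 4, 8                      # hypothetical dimensions
W_in = rng.standard_normal((D_IN, D_STATE)) * 0.5
W_rec = rng.standard_normal((D_STATE, D_STATE)) * 0.5
tau = 1.0                                 # base time constant
A = rng.standard_normal(D_STATE) * 0.5    # target state vector
DT = 0.1                                  # explicit-Euler step size


def ltc_step(x: np.ndarray, u: np.ndarray) -> np.ndarray:
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A."""
    f = np.tanh(u @ W_in + x @ W_rec)     # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A   # adaptive ("liquid") dynamics
    return x + DT * dxdt


state = np.zeros(D_STATE)
for _ in range(5):                        # process a short input stream
    state = ltc_step(state, rng.standard_normal(D_IN))
print(state.round(3))
```

Because the state is a fixed-size vector updated step by step, models in this research lineage avoid the growing attention cache of transformers, which is consistent with the startup's claim of lower memory costs during inference.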

The company further claims that an advantage of the Liquid-based AI models is that their architecture can be automatically optimised for a specific platform based on its requirements and inference cache size.
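The company has not described how this per-platform optimisation works. As a purely hypothetical illustration of the concept, the snippet below picks the largest model configuration whose working set fits a device's inference cache; the configuration names and thresholds are invented for the example.

```python
# Hypothetical sketch: choose the biggest model configuration that fits
# the target device's cache budget. Not Liquid AI's actual scheme.
CONFIGS = [
    {"name": "lfm-1.3b-like", "cache_mib": 512},
    {"name": "lfm-3.1b-like", "cache_mib": 1536},
    {"name": "lfm-40.3b-like", "cache_mib": 8192},
]


def pick_config(device_cache_mib: int) -> str:
    """Return the largest configuration that fits the device's cache."""
    fitting = [c for c in CONFIGS if c["cache_mib"] <= device_cache_mib]
    if not fitting:
        raise ValueError("no configuration fits this device")
    return max(fitting, key=lambda c: c["cache_mib"])["name"]


print(pick_config(2048))  # -> 'lfm-3.1b-like'
```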

While the startup's claims are bold, the models' performance and efficiency can only be gauged as developers and enterprises begin using them in their AI workflows. The startup did not reveal the source of its training datasets or any safety measures added to the AI models.