AI21 and Databricks Show Open Source Can Radically Slim Down AI

By Tiernan Ray, Senior Contributing Writer

Published on April 2, 2024, 6:36 a.m. PT

Introduction

As the battle between open-source and closed-source generative AI intensifies, AI21 Labs and Databricks have unveiled language models that challenge the dominance of giants like OpenAI and Anthropic. These models demonstrate that smaller neural networks can achieve comparable performance while consuming far less memory and compute.

Jamba: A Fusion of Efficiency

AI21's Jamba combines two distinct approaches: the ubiquitous Transformer, foundational to models like OpenAI's GPT-4, and a newer architecture called a "state space model" (SSM). Researchers at Carnegie Mellon University and Princeton University refined the SSM into "Mamba." By interleaving Mamba layers with Transformer layers, AI21 produced Jamba, which matches or outperforms state-of-the-art models of similar size across various benchmarks.
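The interleaving idea can be sketched in a few lines. AI21 describes Jamba blocks that use attention in only a fraction of their layers (roughly one attention layer for every eight); the sketch below is purely illustrative and is not AI21's actual code — the layer contents are placeholders, and only the layout pattern is the point.

```python
def jamba_layer_layout(num_layers: int, attn_every: int = 8) -> list[str]:
    """Return an illustrative Jamba-style layer layout: one attention
    layer per `attn_every` layers, with Mamba (SSM) layers in between."""
    return [
        "attention" if i % attn_every == 0 else "mamba"
        for i in range(num_layers)
    ]

# A 16-layer stack: attention appears only twice, Mamba fills the rest.
print(jamba_layer_layout(16))
```

Because most layers are Mamba layers, whose state does not grow with sequence length, the cost of the stack is dominated far less by context length than in a pure-Transformer model.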

Memory Efficiency

Jamba achieves this feat by slimming down memory usage. With 12 billion active parameters, it rivals Meta's open-source Llama 2 model. However, where Llama 2 would require 128GB of DRAM to hold the attention mechanism's cached keys and values at long context lengths, Jamba gets by with just 4GB. Swapping most attention layers for Mamba layers is what lets Jamba fit on a single 80GB GPU.
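The memory saving follows directly from the arithmetic of the key-value (KV) cache, which grows with the number of attention layers and the context length. The sketch below uses illustrative numbers (head counts, dimensions, and context length are assumptions, not AI21's published configuration) to show why replacing most attention layers with Mamba layers shrinks the cache proportionally.

```python
def kv_cache_gib(attn_layers: int, kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate attention KV-cache size in GiB.

    Keys and values (the factor of 2) are cached per attention layer,
    per KV head, per token, at `bytes_per_elem` bytes (2 for fp16).
    """
    total = 2 * attn_layers * kv_heads * head_dim * context_len * bytes_per_elem
    return total / 2**30

# A pure-Transformer stack (attention in all 32 layers) versus a
# Jamba-style stack keeping attention in only 4 of 32 layers:
full = kv_cache_gib(attn_layers=32, kv_heads=8, head_dim=128, context_len=256_000)
hybrid = kv_cache_gib(attn_layers=4, kv_heads=8, head_dim=128, context_len=256_000)
print(f"full: {full:.1f} GiB, hybrid: {hybrid:.1f} GiB")  # cache shrinks 8x
```

With only an eighth of the layers caching keys and values, the cache is an eighth the size — the same kind of trade that lets Jamba's attention footprint drop from tens of gigabytes to single digits.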

DBRX: Empowering Custom Models

Databricks, for its part, has introduced DBRX, an open large language model (LLM) meant to let users build, train, and deploy custom models more affordably. By reducing reliance on a handful of closed models, DBRX aims to democratize AI development.

Efficiency as a Weapon

Efficiency will be the open-source community's secret weapon in the AI arms race. Smaller, resource-efficient models like Jamba and DBRX prove that innovation need not come at the cost of computational extravagance.

Stay tuned as the landscape of generative AI continues to evolve!
