
MatX is a startup building specialized hardware for the world's most advanced AI models, particularly large language models (LLMs).
By dedicating every transistor to maximizing performance for these models, MatX aims to deliver up to 10 times more computing power than competing products, enabling AI labs to build models that are significantly smarter and more useful. Their product strategy emphasizes cost efficiency for high-volume pre-training and production inference, optimizing for performance-per-dollar first and latency second.
Their chips are designed to support transformer-based models with at least 7 billion parameters, and their interconnect is built to scale to the very largest models. This delivers peak performance for both training and inference, which MatX projects could accelerate the development of the world's best models by 3 to 5 years.