Mistral AI | Frontier AI in your hands (mistral.ai)
Large Enough
Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function calling capabilities. This latest generation continues to push the boundaries of cost efficiency, speed, …
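Function calling is the most concrete capability named above. A minimal sketch of how a tool call flows through the chat endpoint, assuming the mistralai Python client (v1) with an API key in MISTRAL_API_KEY; the get_weather tool is hypothetical and exists only for this example:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical tool: the model never runs it, it only emits a structured
# request to call it with arguments matching this JSON schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    # Arguments arrive as a JSON string to be parsed and dispatched
    # to the real implementation of the tool.
    print(call.function.name, call.function.arguments)
```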
Mistral NeMo
Mistral NeMo: our new best small model. A state-of-the-art 12B model with 128k context length, built in collaboration with NVIDIA, and released under the Apache 2.0 license. Today, we are excited to release Mistral NeMo, a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are …
Codestral Mamba
As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialised in code generation, available under an Apache 2.0 license. Following the publishing of the Mixtral family, Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use …
Codestral: Hello, World!
Empowering developers and democratising coding with Mistral AI. We introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced …
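The shared instruction and completion endpoint means the same model serves both chat-style instructions and fill-in-the-middle completions. A sketch of the latter, assuming the mistralai Python client, where the fill-in-the-middle call is exposed as client.fim.complete and codestral-latest is the platform's model alias:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Fill-in-the-middle: the model generates the code between prompt and
# suffix, which is how editor autocompletion typically drives a code model.
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n: int) -> int:\n    ",
    suffix="\n\nprint(fibonacci(10))",
)
print(response.choices[0].message.content)
```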
Cheaper, Better, Faster, Stronger
Continuing to push the frontier of AI and making it accessible to all. Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Mixtral 8x22B comes with the …
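The 39B-of-141B figure follows from sparse routing: a gating network scores every expert for each token, but only the top-k experts actually run, so the parameters touched per token are a small fraction of the total. A toy sketch of that routing in plain NumPy, with hypothetical sizes (the real model routes each token to 2 of 8 experts inside every transformer block):

```python
import numpy as np

def make_expert(W):
    # A stand-in "expert": one tiny nonlinear feed-forward map.
    return lambda v: np.tanh(W @ v)

def smoe_layer(x, gate_w, experts, k=2):
    # x: (d,) token activation; gate_w: (n_experts, d) router weights.
    logits = gate_w @ x                  # router score per expert
    top = np.argsort(logits)[-k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen experts only
    # Only k of len(experts) networks run for this token: total parameters
    # are large, but "active" parameters per token stay small.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [make_expert(rng.normal(size=(d, d))) for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
print(smoe_layer(rng.normal(size=d), gate_w, experts).shape)  # (16,)
```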
Mistral AI continues its mission to deliver the best open models to the developer community. Moving forward in AI requires taking new technological turns beyond reusing well-known architectures and training paradigms. Most importantly, it requires making the community benefit from original models to foster new inventions and usages. Today, the team is proud to release Mixtral 8x7B, a high-quality …
Our models strike an unmatched latency-to-performance ratio and achieve top-tier reasoning performance on all common benchmarks. We designed our models to be as unbiased and useful as possible, providing full modular control over moderation. We have shipped the most capable open models to accelerate AI innovation. Because we are independent, our endpoints and platform are portable across clouds.
The Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date.

Mistral 7B in short: Mistral 7B is a 7.3B parameter model that:
- Outperforms Llama 2 13B on all benchmarks
- Outperforms Llama 1 34B on many benchmarks
- Approaches CodeLlama 7B performance on code, while remaining good at English tasks
- Uses Grouped-query attention (GQA) for faster inference (sketched below)
- Uses Sliding Window Attention (SWA) …
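GQA's speedup is largely about the KV cache: each key/value head is shared by a group of query heads, so the cache (and its memory traffic) shrinks by the group factor. A toy NumPy sketch with hypothetical shapes (the model itself pairs 32 query heads with 8 KV heads):

```python
import numpy as np

def grouped_query_attention(q, k, v):
    # q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d), n_kv_heads < n_q_heads.
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads        # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                    # the KV head this query head shares
        scores = q[h] @ k[kv].T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True) # softmax over key positions
        out[h] = w @ v[kv]
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))            # 8 query heads
k = rng.normal(size=(2, 4, 16))            # only 2 KV heads to cache
v = rng.normal(size=(2, 4, 16))
print(grouped_query_attention(q, k, v).shape)  # (8, 4, 16)
```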