In this episode, we discuss Mixtral of Experts by Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. Mixtral 8x7B is a Sparse Mixture of Experts (SMoE) language model that builds on the Mistral 7B architecture: each layer contains 8 feed-forward experts, and a router selects two of them per token, so each token has access to 47B parameters while only 13B are active. Mixtral outperforms or matches Llama 2 70B and GPT-3.5 across benchmarks, with especially strong results in mathematics, code generation, and multilingual tasks. A fine-tuned instruction-following version, Mixtral 8x7B – Instruct, also outperforms leading models, and both models are released under the Apache 2.0 license.
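To make the routing idea concrete, here is a minimal sketch of a sparse mixture-of-experts feed-forward layer with top-2 gating, in the spirit of the paper's description. It is illustrative only: the hidden sizes are placeholders rather than Mixtral's actual dimensions, and the per-expert loop is written for clarity, not efficiency.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative sparse MoE feed-forward layer with top-2 routing.

    Each token is routed to the top-2 of `num_experts` expert MLPs, and
    their outputs are combined with softmax-normalized gate weights.
    Dimensions here are placeholders, not Mixtral's real hyperparameters.
    """

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq_len, d_model)
        scores = self.gate(x)                                # (B, S, num_experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)  # pick 2 experts per token
        weights = F.softmax(top_vals, dim=-1)                # normalize over the selected 2
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            idx = top_idx[..., k]                            # (B, S) expert id per token
            w = weights[..., k].unsqueeze(-1)                # (B, S, 1) gate weight
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    # only the selected experts process a given token,
                    # so roughly 2/8 of the expert parameters are active per token
                    out[mask] += w[mask] * expert(x[mask])
        return out

# Usage: a toy batch of 2 sequences of 16 tokens
x = torch.randn(2, 16, 512)
layer = SparseMoELayer()
print(layer(x).shape)  # torch.Size([2, 16, 512])
```

This is what makes the 47B-total vs. 13B-active distinction possible: all experts' weights are stored, but for any single token only the two selected experts run.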