Hugging Face and Intel – Driving Towards Practical, Faster, Democratized, and Ethical AI Solutions
Transformer models are the powerful neural networks that have become the standard for delivering state-of-the-art performance across modern AI applications. But there is a challenge: training these deep learning models at scale, and running inference on them, requires a large amount of computing power. That can make the process time-consuming, complex, and costly.
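To make the scale concrete, here is a rough back-of-the-envelope calculation (an illustrative sketch, not from the episode): BLOOMZ, the model discussed in the Habana Gaudi2 blog post linked below, has about 176 billion parameters, so just holding its weights in 16-bit precision takes hundreds of gigabytes of accelerator memory, before accounting for activations or optimizer state.

```python
# Illustrative arithmetic: approximate memory needed just to hold model
# weights. 176e9 is the published BLOOM/BLOOMZ parameter count; 2 bytes
# per parameter assumes 16-bit (fp16/bf16) storage.
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Return the approximate weight footprint in gigabytes (10**9 bytes)."""
    return n_params * bytes_per_param / 1e9

bloomz_params = 176e9  # BLOOMZ parameter count
print(f"BLOOMZ 16-bit weights: ~{weight_memory_gb(bloomz_params):.0f} GB")
# ~352 GB for weights alone, which is why multi-device accelerators
# and inference optimization matter.
```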
In this episode, we discuss what it takes to build accessible, production-level AI solutions. We also explore ethical questions around AI usage and why open, democratized AI solutions are important.
Learn more:
Hugging Face
https://huggingface.co
Hugging Face Hub
https://huggingface.co/models
Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator
https://huggingface.co/blog/habana-gaudi-2-bloom
Accelerating Stable Diffusion Inference on Intel CPUs
https://huggingface.co/blog/stable-diffusion-inference-intel
Transformer Performance with Intel & Hugging Face Webinar
https://www.intel.com/content/www/us/en/developer/videos/optimize-end-to-end-transformer-model-performance.html#gs.pomt5k
Intel Explainable AI Tools
https://github.com/IntelAI/intel-xai-tools
Intel Distribution of OpenVINO Toolkit
https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html
Intel AI Analytics Toolkit (AI Kit)
https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html
Guests:
Julien Simon – Chief Evangelist @ Hugging Face
Ke Ding – Principal Engineer @ Intel