The Lossless Group
Fine Tuners (6 tools)
The Fastest AI Inference Platform & Hardware

Discover SambaNova: a complete AI platform delivering fast AI inference, fine-tuning, and scalable solutions, positioned as a GPU alternative built for enterprise and agentic AI. Try it today on SambaCloud.

An array framework for Apple silicon

MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple silicon, brought to you by Apple machine learning research.

Repo Prompt

A macOS native app designed to remove all the friction involved in iterating on your code with the most powerful AI models.

A faster way to build and share data apps

Turn your data scripts into shareable web apps in minutes. All in pure Python. No front-end experience required.

We make fine-tuning accessible, scalable, and fun.

Scale and customize LLMs across different AI models using our free, open-source solutions.

Open Source Fine-Tuning for LLMs

We're making AI training easier for everyone. As faster hardware gets harder to make, we apply our maths and coding expertise to optimizing AI and ML workloads.