Solution

What are Pre-trained Models (LLMs)?

Pre-trained models are like ready-to-use tools that have already learned a lot from huge amounts of data and come equipped to perform a wide range of tasks. These models can be incredibly useful because they've done the heavy lifting of learning general patterns and features from massive datasets. You can take a pre-trained model and fine-tune it for new tasks, which saves time and effort compared to starting from scratch. Examples include well-known models like Llama and SDXL, which have been trained extensively for tasks like natural language understanding and image generation. On Hexs, developers can use them as base models to fine-tune and generate agents specialized in their preferred niches.

What is Fine-tuning?

Fine-tuning is a process in machine learning where we take a pre-trained model that has already learned a lot from a large dataset and train it further on a smaller, specific dataset to make it better at a particular task. This adapts the model's existing knowledge to new tasks without starting from scratch, saving time and resources while improving the model's performance on the targeted task. On Hexs, developers perform fine-tuning to generate new agents specialized in their preferred niches.
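The idea can be sketched with a deliberately tiny model. Everything here is illustrative: the one-parameter "model", the datasets, and the training loop are stand-ins, not Hexs or Llama internals. The point is only that starting from a pre-trained weight lets the niche fit converge in far fewer steps than starting from zero.

```python
# Minimal illustration of fine-tuning with a one-parameter linear model
# (y = w * x) trained by plain gradient descent. All data and names are
# hypothetical; real fine-tuning uses full pre-trained models like Llama.

def train(w, data, lr=0.01, steps=200):
    """Fit y = w * x by gradient descent on mean squared error."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pre-training": learn a general pattern (y ~ 2x) from a larger dataset.
general_data = [(x, 2.0 * x) for x in range(1, 11)]
w_pretrained = train(0.0, general_data)

# "Fine-tuning": start from the pre-trained weight and adapt it to a
# small niche dataset (y ~ 2.5x) with only a few extra steps.
niche_data = [(1, 2.5), (2, 5.0), (3, 7.5)]
w_finetuned = train(w_pretrained, niche_data, steps=50)

print(round(w_pretrained, 1))  # close to 2.0
print(round(w_finetuned, 1))   # close to 2.5
```

The fine-tuned weight moves from the general pattern to the niche one without ever re-training on the large dataset, which is the time-and-resource saving described above.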

Agents

Agents, in the context of fine-tuning, are specialized versions of pre-trained large language models (LLMs) tailored to perform specific tasks more effectively. While a pre-trained LLM is capable of a wide range of tasks, an agent is a refined version that excels at a particular task thanks to targeted fine-tuning, like the majority of models in the fine-tuned models section on Hexs.

What is Open-source AI training?

Open-source AI training involves using publicly available resources and tools to train artificial intelligence models. The resources and data used for training are accessible to anyone and can be modified or improved by the community, as on Hexs. This allows developers and researchers to collaborate, innovate, share knowledge, and build upon existing work, such as fine-tuning agents, to advance AI technology. On Hexs, developers can fine-tune any open-source agent to generate a new agent specialized for a preferred niche.

What is an Agent-based DAO?

Every fine-tuned AI agent's smart contract has a DAO contract associated with it. All users who participate in the collaborative training of that model become members of this DAO and gain a stake in future decisions about the AI model. If the model is sold, the new owner holds the largest decision stake, while previous owners receive only royalties. On Hexs, the agent smart contract initializes the Agent-based DAO: the owner becomes a member of the agent's DAO at initialization, and each new user of the fine-tuned model also becomes a member.
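The membership and royalty rules above can be modeled in plain Python as a sketch. The class, field names, and the royalty rate are illustrative assumptions for this page, not the actual Hexs contract logic, which lives on-chain.

```python
# Hypothetical model of an Agent-based DAO's bookkeeping: the owner is
# the first member, users of the agent join as members, and a sale
# transfers the top decision stake while past owners keep only royalties.

class AgentDAO:
    def __init__(self, owner):
        # The agent's owner becomes the first DAO member at initialization.
        self.owner = owner
        self.members = {owner}
        self.previous_owners = []  # earn royalties, hold no top stake

    def add_member(self, user):
        # Each new user of the fine-tuned agent joins the DAO.
        self.members.add(user)

    def sell(self, new_owner, price, royalty_rate=0.05):
        # On a sale, the previous owner moves to the royalty list and the
        # new owner takes over; royalty_rate is an assumed placeholder.
        self.previous_owners.append(self.owner)
        self.owner = new_owner
        self.members.add(new_owner)
        return {prev: price * royalty_rate for prev in self.previous_owners}

dao = AgentDAO("alice")
dao.add_member("bob")
payouts = dao.sell("carol", price=1000)
print(dao.owner)   # carol
print(payouts)     # {'alice': 50.0}
```

Note that after the sale, "alice" remains a DAO member but no longer holds the owner's decision stake; she only collects royalties on the transfer.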

Inferencing

Inferencing in Generative AI involves leveraging a trained model to generate new data based on patterns learned from a dataset. On Hexs, users generate new data, whether text, images, or videos, using models available on the marketplace.
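A toy example makes the training/inference split concrete. The bigram model below is a deliberately simple stand-in for the marketplace models on Hexs: "training" records which word tends to follow which, and "inference" samples a new sequence from those learned patterns.

```python
# Toy generative model: learn word-to-next-word patterns (training),
# then sample new text from them (inference). Purely illustrative.

import random

def learn_bigrams(corpus):
    """Training: record which words follow each word in the corpus."""
    model = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def infer(model, start, length=5, seed=0):
    """Inference: sample a new word sequence from the learned patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

model = learn_bigrams("the cat sat on the mat the cat ran")
print(infer(model, "the"))
```

The generated sequence is new (it need not appear verbatim in the corpus), yet every transition in it was learned from the data, which is the essence of generative inference.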
