Fine-tuning a Large Language Model (LLM) coding assistant


Written by

Tomasz Karczewski

Solution Architect

Welcome to the new Red Future Navigators series. Here, our domain experts will address a range of forward-thinking topics – guiding our customers, partners and peers through the complexities of innovation and the technical challenges of tomorrow.

The developing capabilities of Natural Language Processing (NLP) models have led to a growing abundance of AI-powered coding assistants, whose popularity is increasing rapidly.

Developing Large Language Models (LLMs) requires huge computational resources, and there are inherent limits to scaling these models by increasing their size or training them for longer.

This paper explores enhancing a Large Language Model coding assistant (StarCoder) with a technique called ‘fine-tuning’, refining its capabilities for a specific code base (Yocto).
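To give a flavour of what such fine-tuning involves in practice, here is a minimal, hypothetical Python sketch using the Hugging Face `transformers`, `datasets` and `peft` libraries. The LoRA approach, model identifier, file paths and hyperparameters below are illustrative assumptions, not details taken from the paper itself.

```python
# Hypothetical sketch: parameter-efficient fine-tuning of StarCoder on a
# Yocto code base. Model name, hyperparameters and data handling here are
# illustrative assumptions, not details from the paper.
from pathlib import Path


def collect_training_texts(repo_root, extensions=(".bb", ".bbappend", ".bbclass", ".inc")):
    """Gather Yocto recipe and class files as raw training text."""
    texts = []
    for path in sorted(Path(repo_root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            texts.append(path.read_text(errors="ignore"))
    return texts


def main():
    # Heavy lifting: load the base model, attach LoRA adapters, train.
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
    model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             task_type="CAUSAL_LM"))

    texts = collect_training_texts("poky/")  # path is an assumption

    def tokenize(batch):
        out = tokenizer(batch["text"], truncation=True, max_length=1024)
        out["labels"] = out["input_ids"].copy()  # causal-LM objective
        return out

    dataset = Dataset.from_dict({"text": texts}).map(
        tokenize, batched=True, remove_columns=["text"])

    Trainer(model=model,
            args=TrainingArguments(output_dir="starcoder-yocto",
                                   num_train_epochs=1),
            train_dataset=dataset).train()


if __name__ == "__main__":
    main()
```

The key idea is that only a small set of adapter weights is trained, so the domain-specific knowledge of the Yocto code base can be injected without the cost of retraining the full model.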

Download the paper to learn more about this evaluation, including:

  • Enhancing utility LLMs with fine-tuning
  • Fine-tuning StarCoder for embedded applications
  • An overview of the training preparations, process and results



Fine-tuning, as an approach to the broader problem of transfer learning, is considered by some to be a key component in developing Artificial Intelligence systems that resemble human reasoning.

While our ambition may not reach that far, fine-tuning could nonetheless prove to be an excellent method for crafting practical development tools of the future.

We have over 20 years of experience in trusted innovation and embedded software development, supporting a global customer base.

Contact our expert team for support with your next development project, or the adoption of AI-enabled tooling in your organisation.