
As the world continues to embrace the power of artificial intelligence, the debate over the necessity of cloud-based AI services like GitHub Copilot is gaining momentum. With the advent of local AI code assistants, the question arises: When you can run AI code assistance right on your desktop, who needs GitHub Copilot anymore?

The Popularity of AI Code Assistants

AI code assistants have garnered significant attention as an early use case for generative AI, especially since Microsoft launched GitHub Copilot. However, for those who are uncomfortable with Microsoft handling their code or unwilling to pay a $10 monthly fee, building your own AI assistant might be the solution.

Microsoft’s Lead and the Alternatives

Microsoft was among the first companies to commercialize AI code assistants and integrate them into Integrated Development Environments (IDEs). However, they are far from being the only option. There are several large language models (LLMs) trained specifically for code generation, and chances are, your current computer can run these models.

Enter Continue

This is where applications like Continue come into play. Continue is an open-source code assistant designed to be embedded into popular IDEs such as JetBrains IDEs or Visual Studio Code. It connects to popular LLM runners like Ollama, Llama.cpp, and LM Studio, which may already be familiar to many users.
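To make that concrete, here is a minimal sketch that queries Ollama's local HTTP API directly, the same kind of endpoint Continue connects to behind the scenes. It assumes Ollama is running on its default port (11434) and that the llama3 model has already been pulled:

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API,
# the same local endpoint a tool like Continue talks to. Assumes Ollama's
# default port 11434 and that the llama3 model has been pulled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # any model you have pulled locally
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,     # return one JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Continue handles this plumbing for you; the point of the sketch is simply that every request stays on localhost.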

Like other popular code assistants, Continue supports code completion and generation, and can optimize, comment, or refactor code for different use cases. Additionally, Continue offers an integrated chatbot with Retrieval-Augmented Generation (RAG) functionality, which lets users ask questions about their own codebase in natural language.

Setting Up Your Local AI Code Assistant

To get started with Continue, you need a machine capable of running standard LLMs. A system with a relatively new processor will suffice, but for optimal performance a GPU from Nvidia, AMD, or Intel with at least 6 GB of VRAM is recommended. For Mac users, any Apple Silicon system, including the original M1, should work, though at least 16 GB of RAM is suggested for the best results.

This guide assumes you have already installed and are running the Ollama model runner on your machine. If not, you can follow Ollama's quick start guide, which should have you up and running in about ten minutes. There is also a guide for deploying Ollama on Intel integrated or Arc GPUs.
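Before wiring up Continue, it can help to confirm that Ollama is reachable and has the models you need. The sketch below uses Ollama's documented /api/tags and /api/pull routes, again assuming the default local endpoint:

```python
# Sketch: verify a local Ollama install and pull a model if it is missing.
# Assumes Ollama's default endpoint http://localhost:11434.
import json
import urllib.request

BASE = "http://localhost:11434"

# /api/tags lists the models already downloaded to this machine.
with urllib.request.urlopen(f"{BASE}/api/tags") as resp:
    local = {m["name"] for m in json.loads(resp.read())["models"]}
print("Installed models:", local or "none")

wanted = "llama3:8b"
if wanted not in local:
    # /api/pull streams progress as newline-delimited JSON objects.
    req = urllib.request.Request(
        f"{BASE}/api/pull",
        data=json.dumps({"name": wanted}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            print(json.loads(line).get("status", ""))
```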

Integration and Telemetry Concerns

Continue supports JetBrains and Visual Studio Code, and for those looking to avoid Microsoft’s telemetry, VSCodium, built by the open-source community, is a viable alternative. To deploy Continue in VSCodium, open the IDE, access the extension management panel, and search for and install Continue, or install it from the command line as sketched below.
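For those who prefer the terminal, VSCodium inherits VS Code's --install-extension CLI flag. The following sketch assumes the codium binary is on your PATH and that the extension's marketplace ID is Continue.continue (worth double-checking against the listing):

```python
# Sketch: install the Continue extension via VSCodium's inherited VS Code
# CLI. Assumes the `codium` binary is on PATH and that the extension's
# marketplace ID is Continue.continue (verify against the listing).
import subprocess

subprocess.run(
    ["codium", "--install-extension", "Continue.continue"],
    check=True,  # raise if the install fails
)
```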

After installation, Continue’s setup wizard will guide you through choosing whether to host models locally or use another provider’s API. In this example, we will use Ollama to host our models locally. With this configuration, Continue relies on a handful of local models: Llama 3 8B for chat, Starcoder2:3B for tab autocompletion, and Nomic-embed-text for embeddings.
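Behind the scenes, the wizard records this choice in Continue's JSON configuration file. The sketch below writes an equivalent config from Python; the file location (~/.continue/config.json) and the key names are assumptions based on Continue's config format at the time of writing and may change between releases:

```python
# Sketch: point every Continue role at local Ollama models. The key names
# (models, tabAutocompleteModel, embeddingsProvider) and the file location
# reflect Continue's config.json format at the time of writing.
import json
from pathlib import Path

config = {
    "models": [
        {"title": "Llama 3 8B", "provider": "ollama", "model": "llama3:8b"}
    ],
    "tabAutocompleteModel": {
        "title": "Starcoder2 3B", "provider": "ollama", "model": "starcoder2:3b"
    },
    "embeddingsProvider": {
        "provider": "ollama", "model": "nomic-embed-text"
    },
}

path = Path.home() / ".continue" / "config.json"
path.parent.mkdir(exist_ok=True)
path.write_text(json.dumps(config, indent=2))
print(f"Wrote {path}")
```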

However, it’s worth noting that Continue collects anonymous telemetry data by default, including whether suggestions are accepted or rejected, the names of models and commands used, the number of tokens generated, and the names of the operating system and IDE. Users who wish to opt out of data collection can set allowAnonymousTelemetry to false in the config.json file inside the .continue directory in their home folder, or uncheck the Continue: Telemetry Enabled box in VS Code settings.
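Programmatically, opting out amounts to flipping one flag in that same config file. A minimal sketch, assuming the allowAnonymousTelemetry key and the ~/.continue/config.json location mentioned above:

```python
# Sketch: opt out of Continue's anonymous telemetry by setting the
# allowAnonymousTelemetry flag in ~/.continue/config.json. Key name and
# location reflect Continue's config format at the time of writing.
import json
from pathlib import Path

path = Path.home() / ".continue" / "config.json"
config = json.loads(path.read_text())  # assumes the file already exists
config["allowAnonymousTelemetry"] = False
path.write_text(json.dumps(config, indent=2))
print("Telemetry disabled in", path)
```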

The Question of Effectiveness

With the setup complete, users can begin integrating Continue into their workflow. However, the effectiveness of these local AI code assistants remains a point of contention: while they offer the convenience of not relying on cloud services and the privacy of keeping code local, the quality and reliability of their suggestions can vary.

Conclusion

The rise of local AI code assistants like Continue poses a significant challenge to cloud-based services like GitHub Copilot. As users seek more control over their data and costs, the demand for local solutions is likely to grow. Whether these local assistants can match the performance and convenience of GitHub Copilot remains to be seen, but for many developers, the option to build and run their own AI code assistant is an appealing prospect.

