How Aviary works to ease open source LLM deployments

The Aviary project builds on top of the open-source Ray project with a set of optimizations and configurations to ease deployment of open-source LLMs. Ray is already widely used by large organizations for model training and is the technology that OpenAI uses for its models, including GPT-3 and GPT-4.

The goal with Aviary is to automatically enable users of open-source LLMs to deploy quickly with the right optimizations in place. Nishihara explained that there are many different things that need to be configured on the infrastructure side, including model-parallel inference across multiple GPUs, sharding and performance optimizations. The goal is to have pre-configured defaults for essentially any open-source LLM on Hugging Face, so users don't have to go through a time-consuming process of figuring out infrastructure configuration on their own; Aviary handles all of that for them.

Aviary also aims to help solve the challenge of model selection. With the growing number of models, it's not easy for anyone to know the best model for a specific use case. Nishihara said that by making it easier to deploy open-source LLMs, Aviary is also making it easier for organizations to compare different LLMs. The comparisons enabled via Aviary include accuracy, latency and cost.

As new LLMs emerge, Aviary will enable them quickly

Aviary has been in private development at Anyscale for the last three months. Initially it took a bit of time to get the right configuration for any one open-source LLM, but what has become clear is that there are common patterns across all LLMs for deployment. Nishihara said that when LightGPT became available, Aviary was able to add support for it in less than five minutes.

He explained that there are a few standard architectures that all open-source LLMs conform to in terms of how they handle model parallelism and other critical aspects of deployment. "We don't have to handle hundreds of special cases," said Nishihara. "In fact, you just have to handle each of the standard model architectures, and then all of the different LLMs fall into one of those categories."

Overall, Nishihara expects that the number of open-source models is only going to grow, and as a result the problem of selecting models will only become harder for organizations. "Our hope with Aviary is, with it being open source, anyone from the community who wants to will be able to just easily add new models," he said. "That'll make it easy for anyone using Aviary to just deploy those models without having to really do any extra work."
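The "standard architectures" idea Nishihara describes can be illustrated with a minimal sketch: each model declares which architecture family it belongs to, and deployment defaults are defined once per family rather than once per model. The family names, settings and dictionary layout below are hypothetical illustrations, not Aviary's actual configuration format.

```python
# Hypothetical sketch: deployment defaults are defined per architecture
# family, not per model. All names and values here are illustrative.
ARCHITECTURE_DEFAULTS = {
    "gpt-neox": {"tensor_parallel": 2, "dtype": "float16"},
    "llama":    {"tensor_parallel": 4, "dtype": "float16"},
    "t5":       {"tensor_parallel": 1, "dtype": "bfloat16"},
}

# A newly released model only needs to be mapped to an existing family,
# which is why support for a new LLM can be added quickly.
MODEL_FAMILY = {
    "EleutherAI/gpt-neox-20b": "gpt-neox",
    "amazon/LightGPT": "gpt-neox",   # adding a new model is one line
    "openlm-research/open_llama_7b": "llama",
}

def deployment_config(model_id: str) -> dict:
    """Look up the model's family and return that family's defaults."""
    family = MODEL_FAMILY[model_id]
    return {"model_id": model_id, **ARCHITECTURE_DEFAULTS[family]}

print(deployment_config("amazon/LightGPT"))
```

Because each family's parallelism and precision settings are handled once, a handful of entries in the family table covers every model that conforms to one of those architectures.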
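The model-selection problem the article raises, comparing candidates on accuracy, latency and cost, can also be sketched in a few lines. The models, metric values and budget thresholds below are made up for illustration; they are not measurements from Aviary.

```python
# Hypothetical sketch of ranking candidate LLMs on accuracy, latency and
# cost per request. All numbers are illustrative, not real benchmarks.
candidates = [
    {"name": "model-a", "accuracy": 0.82, "latency_s": 1.9, "cost_usd": 0.004},
    {"name": "model-b", "accuracy": 0.78, "latency_s": 0.6, "cost_usd": 0.001},
    {"name": "model-c", "accuracy": 0.85, "latency_s": 3.2, "cost_usd": 0.009},
]

def pick_model(models, max_latency_s, max_cost_usd):
    """Return the most accurate model that fits the latency and cost budget."""
    eligible = [m for m in models
                if m["latency_s"] <= max_latency_s
                and m["cost_usd"] <= max_cost_usd]
    if not eligible:
        raise ValueError("no model fits the budget")
    return max(eligible, key=lambda m: m["accuracy"])

best = pick_model(candidates, max_latency_s=2.0, max_cost_usd=0.005)
print(best["name"])  # model-a under these example budgets
```

Tightening the latency budget would shift the choice toward a faster but less accurate model, which is exactly the trade-off that per-use-case comparison is meant to surface.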