How LinkedIn benefited from using LLMs for its large user base

Editor

LinkedIn’s project aimed to improve development efficiency by reducing cycle times through pair programming with LLMs. The company wanted to examine the potential of using LLMs to code at scale by generating high-performing code snippets for tasks such as bug fixes, feature additions, and code completion. However, LinkedIn found that using LLMs often meant rewriting a large proportion of the generated code or settling for substandard output, offering some insight into the challenges organizations may face when implementing similar systems.

The solution LinkedIn implemented involves measuring LLM output quality along dimensions such as accuracy, coherence, relevance, and safety at various stages of the development process. The company developed metrics for code quality such as Test95, which measures the proportion of generated snippets that pass existing test cases, counted among successful completions, alongside human evaluation of relevance, accuracy, and coherence. By optimizing these evaluation processes, LinkedIn aims to give its developers more efficient feedback loops, facilitating faster experimentation and improving overall model performance.

LinkedIn’s project also highlighted challenges with using LLMs at scale, notably high token costs and immature automated evaluation. The company revealed that LLM input and output tokens are priced at $30 and $60 per million tokens, respectively, and that its automated evaluation capabilities are still a work in progress. LinkedIn is building model-based evaluators to estimate key LLM metrics, including overall quality score, hallucination rate, coherence, and responsible AI violations, to address these challenges and enable faster experimentation in development.

One important aspect of LinkedIn’s project was the need to address the potential ethical concerns associated with implementing large language models in software development. The company emphasized the importance of responsible AI practices and building controls into its development process to ensure the ethical and secure use of LLMs. LinkedIn plans to implement fairness monitoring, privacy controls, data protections, and other safeguards to mitigate risks associated with the use of LLMs in software engineering.

Overall, LinkedIn’s project demonstrates the potential benefits and challenges of using large language models in software development. The company’s efforts to improve development efficiency, optimize evaluation processes, and address ethical concerns highlight the complexities of integrating LLMs into existing workflows. By addressing these challenges and leveraging the capabilities of LLMs effectively, LinkedIn hopes to enhance its software development practices and provide value to its users through innovative AI technologies.
