As large language model (LLM) technology continues to mature, LinkedIn, one of the world's largest professional networking platforms, has been actively pushing it into production. The LinkedIn team recently shared the lessons they learned while building generative AI products and described in detail how they applied LLM technology to improve the user experience.
While developing its new AI experiences, LinkedIn found that the product had to deliver three key things: getting information faster, connecting the dots between pieces of information, and getting advice. A real-world scenario shows how the new system works: while browsing the LinkedIn feed, a user comes across an interesting post about accessibility in design. Behind the scenes, the system selects a suitable agent based on the user's question, gathers the relevant information, and generates a reply.
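As a rough illustration of that dispatch step, the toy Python sketch below routes a user's question to one of several agents. The agent names, the keyword-based router, and the handler functions are all hypothetical; the article does not describe LinkedIn's actual routing implementation.

```python
# Hypothetical sketch: dispatching a user query to one of several agents.
# Agent names and the keyword-based router are illustrative stand-ins only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    description: str
    handle: Callable[[str], str]


def summarize_post(query: str) -> str:
    # Placeholder for an agent that gathers post context and drafts a summary.
    return f"[summary agent] answering: {query}"


def assess_job_fit(query: str) -> str:
    # Placeholder for an agent that compares a profile against a job posting.
    return f"[job-fit agent] answering: {query}"


AGENTS = [
    Agent("post_summarizer", "explains or summarizes a feed post", summarize_post),
    Agent("job_fit", "assesses fit for a job posting", assess_job_fit),
]


def route(query: str) -> Agent:
    """Pick the agent that best matches the query (toy keyword heuristic)."""
    q = query.lower()
    if "job" in q or "apply" in q:
        return AGENTS[1]
    return AGENTS[0]


if __name__ == "__main__":
    user_query = "Can you summarize what this post says about accessibility in design?"
    agent = route(user_query)
    print(agent.handle(user_query))
```

In a production system the router itself would typically be an LLM or classifier rather than a keyword check; the point here is only the shape of the query-to-agent hand-off.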
LinkedIn's overall design follows the retrieval-augmented generation (RAG) pattern: every user query goes through a fixed three-step pipeline of routing, retrieval, and generation. This design helped LinkedIn stand up a basic framework in a short time and sped up development. LinkedIn also adopted embedding-based retrieval (EBR), along with a step-specific evaluation pipeline, to keep the user experience consistent.
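A minimal sketch of that fixed routing → retrieval → generation flow follows, assuming a toy in-memory document store, a placeholder embedding function, and a stubbed-out generation step; none of the function names, vectors, or data come from LinkedIn's system. It only illustrates how embedding-based retrieval ranks candidates by cosine similarity before generation.

```python
# Minimal RAG pipeline sketch: retrieval via embedding similarity, then generation.
# The embeddings, documents, and generation stub are illustrative assumptions.
import math

# Toy "document store": (text, embedding) pairs. A real system would use a
# learned embedding model and an approximate-nearest-neighbor index.
DOCS = [
    ("Accessible design benefits users with low vision.", [0.9, 0.1, 0.0]),
    ("Quarterly hiring trends in the tech industry.", [0.1, 0.8, 0.2]),
]


def embed(text: str) -> list[float]:
    # Placeholder embedding: a real pipeline would call an embedding model here.
    return [0.8, 0.2, 0.1] if "accessib" in text.lower() else [0.2, 0.7, 0.3]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


def retrieve(query: str, k: int = 1) -> list[str]:
    # Embedding-based retrieval (EBR): rank documents by similarity to the query.
    q_vec = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]


def generate(query: str, context: list[str]) -> str:
    # Placeholder for the LLM call that writes the final answer from context.
    return f"Answer to '{query}' grounded in: {context}"


def answer(query: str) -> str:
    # Fixed pipeline: routing (assumed already done), then retrieval, then generation.
    context = retrieve(query)
    return generate(query, context)


print(answer("Why does accessibility matter in design?"))
```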
Development was split so that different people built different agents independently. This sped up development but made it harder to keep the user experience unified. To address this, LinkedIn set up a small "horizontal" engineering pod focused on the overall experience, covering evaluation/testing tooling, global prompt templates, shared UX components, and a server-driven UI framework.
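One way such a global prompt template can enforce consistency across independently built agents is sketched below. This is a guess at the general idea, assuming a single shared template that each agent team fills in; the template text and its fields are assumptions, not LinkedIn's actual templates.

```python
# Sketch of a shared ("global") prompt template that individual agent teams
# fill in, so tone and structure stay consistent across independently built
# agents. The wording and fields are illustrative assumptions.
GLOBAL_TEMPLATE = """You are a LinkedIn assistant. Be concise and professional.

Task: {task}
Retrieved context:
{context}

User question: {question}
Answer:"""


def build_prompt(task: str, context: list[str], question: str) -> str:
    return GLOBAL_TEMPLATE.format(
        task=task,
        context="\n".join(f"- {c}" for c in context),
        question=question,
    )


# Two different agents could reuse the same template, differing only in task and context.
print(build_prompt(
    task="Summarize a feed post about accessibility in design",
    context=["Post text: Accessible design benefits everyone..."],
    question="What are the key takeaways?",
))
```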
LinkedIn's experience shows that, despite the many difficulties of building generative AI products, sound design and the right organizational structure make it possible to apply LLM technology effectively in real products and improve the user experience. As the technology keeps advancing, we can expect more innovative applications to land, bringing users smarter and more personalized services.
The English version follows:
News Title: “LinkedIn Shares LLM Implementation Secrets: Error Rate Dropped to 0.01%”
Keywords: LLM Implementation, LinkedIn Experience, AI Agents
News Content: As large language model (LLM) technology continues to mature, LinkedIn, one of the world’s largest professional networking platforms, is actively pushing its applications into the real world. Recently, the LinkedIn team shared their invaluable experiences in building generative AI products and detailed how LLM technology is applied to enhance user experience.
During the development of the new AI experiences, LinkedIn found that the product needed to meet three key points: faster information retrieval, connecting the dots between pieces of information, and the provision of suggestions. A real-world scenario was used to show how the new system operates: while browsing the LinkedIn feed, a user stumbles upon an interesting post about accessibility in design. Behind the scenes, the system selects the appropriate agent to handle the user's query, collects relevant information, and generates a response.
LinkedIn's overall design follows the retrieval-augmented generation (RAG) pattern, processing user queries through a fixed three-step pipeline: routing, retrieval, and generation. This design helped LinkedIn establish a basic framework quickly and improved development speed. LinkedIn also adopted embedding-based retrieval (EBR) and specific evaluation pipelines for each step to ensure consistency in the user experience.
The development work was split so that different people built different agents independently, which increased development speed but also made it harder to maintain a unified user experience. To address this, LinkedIn established a small "horizontal" engineering pod dedicated to the overall experience, including evaluation/testing tools, global prompt templates, shared UX components, and a server-driven UI framework.
LinkedIn’s experience demonstrates that, despite the many difficulties encountered in building generative AI products, through proper design and organizational structure, LLM technology can be effectively applied to actual products to enhance user experience. With ongoing technological advancements, we can look forward to more innovative applications coming to fruition, bringing users smarter and more personalized services.
【来源】https://www.jiqizhixin.com/articles/2024-08-06-8