Gartner - 10 Best Practices for Scaling Generative AI
I recently came back from Gartner's Data and Analytics Summit in Orlando, Florida. As expected, GenAI was a big area of focus and interest. One of the sessions I attended was "10 Best Practices for Scaling Generative AI." The session highlighted the rapid adoption of generative AI, with 45% of organizations piloting and 10% already in production as of September 2023. While benefits like workforce productivity, multi-domain applications, and competitive differentiation are evident, there are also significant risks around data loss, hallucinations, black-box behavior, copyright issues, and potential misuse. Through 2025, Gartner predicts at least 30% of generative AI projects will be abandoned after proof of concept due to issues like poor data quality, inadequate risk controls, escalating costs, or unclear business value.

To successfully scale generative AI, the session outlined 10 best practices:

1. Continuously prioritize use cases aligned to the organization's AI ambition and measure business value.
2. Create a decision framework for build vs. buy, evaluating model training, security, integration, and pricing.
3. Pilot use cases with an eye toward future scalability needs around data, privacy, security, etc.
4. Design a composable platform architecture to improve flexibility and avoid vendor lock-in.
5. Put responsible AI principles at the forefront across fairness, ethics, privacy, and compliance, and evaluate risk mitigation tools.
6. Invest in data and AI literacy programs across functions and leadership.
7. Instill robust data engineering practices like knowledge graphs and vector embeddings (see the sketch at the end of this post).
8. Enable seamless human-AI collaboration with human-in-the-loop processes and communities of practice.
9. Apply FinOps practices to monitor, audit, and optimize generative AI costs.
10. Adopt an agile, product-centric approach with continuous updates based on user feedback.

The session stressed balancing individual and organizational needs while making responsible AI the cornerstone for scaling generative AI capabilities. Hope you found these useful. What are your thoughts on best practices for scaling GenAI?
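To make practice #7 a bit more concrete, here is a minimal sketch of the vector-embedding side of that practice, assuming the sentence-transformers library; the model name and documents are illustrative placeholders, and a real pipeline would add chunking, a vector database, and refresh logic.

```python
# Minimal sketch: embed documents and retrieve the closest match for a query.
# Model name and documents are illustrative placeholders, not part of the
# Gartner session content.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Quarterly revenue grew 12% year over year.",
    "The onboarding guide covers SSO and role-based access.",
    "Support tickets are triaged within four business hours.",
]

# Encode documents once up front; encode each query at request time.
doc_vectors = model.encode(documents, normalize_embeddings=True)

def top_match(query: str) -> str:
    """Return the document whose embedding is most similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    # With normalized vectors, the dot product equals cosine similarity.
    scores = doc_vectors @ query_vector
    return documents[int(np.argmax(scores))]

print(top_match("How fast do we respond to support requests?"))
```

The same embedding store can then feed retrieval-augmented generation, which is where this practice typically pays off at scale.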
EU AI Act - Here's what it means for enterprises

Last week, the European Union passed a new AI Act. Recognized as one of the strictest AI regulations globally, this development holds significant implications for companies involved in AI, especially in data-intensive sectors like technology, finance, and healthcare. The AI Act introduces a risk-based system to regulate AI applications, categorizing them as unacceptable, high, limited, or minimal risk. High-risk systems, like those used in critical areas such as healthcare or transportation, will now face stringent requirements around training data, human oversight mechanisms, and security safeguards.

Key Provisions of the Act

Prohibited Practices
Certain high-risk AI uses will be banned outright, such as:
- Systems enabling biometric categorization based on sensitive traits like ethnicity or gender
- Indiscriminate 'real-time' facial recognition for mass surveillance in public spaces
- AI that exploits vulnerabilities of specific groups like children for commercial gain

High-risk Systems
For "high-risk" AI systems with potential impacts on health, safety, rights, or democracy, the Act mandates:
- Comprehensive risk assessments and mitigation measures
- Detailed documentation on capabilities, limitations, and risks
- Appropriate human oversight and control processes
- High standards for privacy, security, accuracy, and resilience

Limits on Law Enforcement
Use of 'real-time' remote biometric identification in public areas by law enforcement will be restricted, with narrowly defined exceptions permitted only for serious crimes under strict anti-bias safeguards.

Transparency Obligations
All AI systems must adhere to transparency requirements, including:
- Clear disclosure to users on system capabilities and limitations
- Respecting intellectual property rights, such as copyrighted training data
- Enabling monitoring by allowing information logging (see the sketch after the implications below)
- Comprehensive documentation and reporting

The Act also enables regulatory sandboxes for real-world AI testing, especially benefiting SMEs and startups.

While the Act is still being finalized, here are the key timelines:
- Early/mid-2024 entry into force: General prohibitions and obligations start taking effect.
- Twenty days after entry into force: Title I (General provisions) and Title II (Prohibited AI practices) become applicable. This means businesses need to be aware of banned AI uses.
- 12 months post-enforcement: High-risk and general-purpose AI systems must comply with new regulations.
- 24 months later: Wider enforcement of the Act, giving companies time to adjust.

So, what does this mean for the enterprise AI landscape? A few major implications:
- Higher cost and complexity: Stringent rules will increase costs and complexity for deploying high-risk AI systems across industries like healthcare.
- Demand for AI governance: Demand will surge for AI governance solutions to assist with mandated risk assessments, documentation, auditing, and compliance activities.
- A slower pace of innovation: Prioritizing safety could slow rapid AI innovation cycles in the EU versus other regions.
- Strategic adjustments: Companies may re-label or re-frame AI capabilities to avoid the high-risk designation and all the new rules that come with it. For example, removing the "AI" label from products even if they use machine learning under the hood.

An advantage for enterprise AI platforms
In this new environment, enterprises may flock to end-to-end AI platforms that can provide comprehensive AI governance, security, monitoring, and auditing, as opposed to trying to stitch together point solutions.
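To make the logging obligation from the transparency section concrete, here is a minimal sketch of what instrumenting generative AI calls for auditability might look like; the decorator, model name, and log fields are hypothetical assumptions for illustration, not language from the Act.

```python
# Minimal sketch: record an audit trail for each generative AI call, in the
# spirit of the Act's logging and documentation obligations. The call_model
# function, model name, and log fields are hypothetical placeholders.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(model_name: str):
    """Decorator that logs inputs, outputs, and timing for a model call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(prompt: str, **kwargs):
            start = time.time()
            response = func(prompt, **kwargs)
            audit_log.info(json.dumps({
                "model": model_name,
                "prompt": prompt,
                "response": response,
                "latency_s": round(time.time() - start, 3),
                "timestamp": int(start),
            }))
            return response
        return wrapper
    return decorator

@audited(model_name="example-llm-v1")
def call_model(prompt: str) -> str:
    # Placeholder for a real model invocation.
    return f"(stubbed response to: {prompt})"

print(call_model("Summarize our Q3 results."))
```

Centralizing this kind of record-keeping is exactly the sort of capability the governance platforms mentioned above are likely to provide out of the box.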
Competitive advantage for first movers
Companies that quickly adapt to these new regulations will have a competitive edge. They'll be seen as leaders in responsible AI usage, attracting customers and partners who value ethical practices.

The SnapLogic platform can help you meet some of the AI transparency requirements through visual data flow demonstrations, auto-documentation, detailed execution logs, and stringent access controls for your pipelines and workflows.

How is your company preparing for this regulated future of AI? Please share your thoughts.