AI at Scale · 1 min read

Three Phases of Generative AI Development

We are practically in year two of generative AI being mainstream. Organisations are still figuring out a path to production that aligns user delight with total cost of ownership while delivering a sizeable return on investment. This post describes a pattern that has become very prevalent for arriving at that outcome.

Given that this is proving to be the path to production, organisations first need to settle on the idea of a platform. With so many aspects of development changing, they need to identify which parts of the SDLC remain the same in the GenAI landscape as patterns, models and libraries evolve. Here is a list of areas worth standardising on.

  • Access to Models - Model cards, visibility of the data used for training, licence details, support
  • Prompt Lifecycle Management - Testing, tuning, version management, regression testing, migration between models
  • Model Comparison - Golden dataset, method of evaluation
  • LLM Hosting - Self-managed vs. managed service, optimisations to match your load pattern
  • Agent Hosting - Security, logging, observability, version management (see, for example, Reasoning Engine from GCP)
  • LLM Observability - Usage and pricing transparency (for example, Genkit from Firebase)
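To make the "Model Comparison" item concrete, here is a minimal sketch of a golden-dataset evaluation harness. The dataset, model names, and the `call_model` function are all hypothetical placeholders; in practice `call_model` would route through whatever model gateway your platform standardises on.

```python
# Hypothetical golden-dataset comparison harness (illustrative only).

GOLDEN_DATASET = [
    {"prompt": "Classify sentiment: 'I love this product'", "expected": "positive"},
    {"prompt": "Classify sentiment: 'This is awful'", "expected": "negative"},
]

def call_model(model_name: str, prompt: str) -> str:
    # Placeholder: in a real platform this would call the model behind
    # your gateway (self-hosted or managed) identified by model_name.
    return "positive" if "love" in prompt else "negative"

def evaluate(model_name: str, dataset: list[dict]) -> float:
    """Return exact-match accuracy of a model on the golden dataset."""
    hits = sum(
        call_model(model_name, case["prompt"]).strip().lower() == case["expected"]
        for case in dataset
    )
    return hits / len(dataset)

for model in ["model-a", "model-b"]:
    print(f"{model}: accuracy = {evaluate(model, GOLDEN_DATASET):.2f}")
```

Because the same dataset and scoring method are reused across models, the comparison stays apples-to-apples even as new models are swapped in.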

By taking a platform first approach, organisations can adapt to the evolving GenAI landscape while minimizing evaluation fatigue, security risks and technical debt.

Related Posts

The theory behind Technical Debt

Technical debt is not new. This weekend I went down the trail of reading up on its impact given the increased throughput of code generation thanks to AI. It turns out AI code generation is a double-edged sword: lightning-fast code creation can mask underlying architectural flaws, poor naming and inadequate testing, often creating more uncoordinated code. The right approach? Thoughtful AI integration that learns from existing codebases, not blindly generating new code.

Part I - Why do large enterprises need a Chief AI Officer?

Having spoken to many organisations keen on adopting AI at scale, I see a consistent pattern in how systems eventually break down. In this post I make a case for the role of a Chief AI Officer and talk about how they can be most effective.

Emerging Tech Projects - Avoid these common pitfalls.

Some ideas on what enterprises should watch out for when starting new projects using emerging technology. This post talks about aspects of scoping, ownership and technology changes. Most of it is obvious but less commonly