6-Week Intensive Programme

Building AI-First Technical Consultants

From code to client — a hands-on workshop for the next generation of technical consultants.

12 Sessions · Hands-On · Claude Code · Client-Facing

The Problem

The gap between knowing how to code and delivering value to clients is where careers stall.

97% of developers use AI tools, but mostly at a surface level
61.9% retention when learning by teaching others (vs 28.3% in control groups)
#1 presales skill: translating features into business value

The Outcome

By Week 6, every intern can:

01

Build with AI

Use Claude Code as a core development workflow — from scaffolding to deployment to debugging.

02

Master the Tools

Deeply understand the product portfolio (JFrog + DevOps ecosystem) and architect real solutions.

03

Win the Room

Run discovery calls, handle objections, and deliver compelling client presentations under pressure.

04

Think Like a Consultant

Translate technical features into business outcomes. Know what the client needs before they do.

Programme Design

Four proven methods, one hybrid design

Backbone

Spiral Build

One deep project builds week over week. Every skill spirals back with more sophistication. Produces a real portfolio piece.

Delivery

Flipped Classroom

Zero lectures in sessions. All knowledge transfer is async. Sessions are 100% practice, presentation, and feedback.

Breadth

Case Study Gauntlet

Three short case studies as homework build pattern recognition across different client types and scenarios.

Retention

Peer Teaching

Interns teach each other what they've learned. Peer tutors retain 61.9% of material — among the highest-retention methods documented in learning research.

Format

How each week works

Teams of 3-4 are each assigned a realistic client scenario that they develop over the full 6 weeks — researching, building, and pitching a real solution.

Async Pre-Work
3-4 hrs/week

Videos, docs, Claude Code tutorials, case study homework. All knowledge transfer happens here.

Entry ticket at session start proves completion
Session 1: Build Lab
1 hour

Hands-on building with Claude Code. Guided early, independent later. Team-based.

5 min ticket + 55 min hands-on
Session 2: Present + Critique
1 hour

Teams present their work. Structured peer feedback. Facilitator debrief.

Presentations grow from 3 min to 15 min over 6 weeks

The Journey

6 weeks at a glance

W1
Foundations
Claude Code + First Deploy
W2
Tool Mastery
JFrog Deep Dive
W3
Discovery
Client Needs Analysis
W4
Advanced AI
Claude Code Power User
W5
Storytelling
Features to Value
W6
The Final
Live Client Pitch
Tracks: Technical Depth · AI Fluency · Client Skills

Darker = primary focus that week. All tracks active every week.

Week 1

Foundations — Claude Code + Your First Deploy

Pre-Work

  • Set up Claude Code, complete official tutorial
  • Containerization basics refresher
  • Read: "What does a solutions engineer do?"
  • Exercise: Scaffold a web app with Claude Code

Build Lab

Build and deploy a containerized app end-to-end using Claude Code. Facilitator demos the first 10 minutes (thinking aloud), then interns build independently.
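A minimal sketch of the kind of end-to-end target a Week 1 team could aim for: a tiny stdlib-only Python web app with a health endpoint that a deploy check can hit. The service name, port, and endpoint here are illustrative assumptions, not programme requirements.

```python
# Minimal "deploy me" web app: the sort of Week 1 end-to-end target.
# Stdlib only, so it containerizes with a very short Dockerfile
# (e.g. FROM python:3.11-slim, COPY app.py, CMD invoking run()).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload() -> dict:
    """Body served at /health; lets a deploy check confirm the container is alive."""
    return {"status": "ok", "service": "week1-demo"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the health payload as JSON for any GET request.
        body = json.dumps(health_payload()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8000) -> None:
    """Start the server; in the container this is what the Dockerfile's CMD invokes."""
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

An app this small keeps the Week 1 presentation honest: one endpoint, one container, one pipeline is a story a non-technical stakeholder can follow.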

Present + Critique

3-min individual presentations: "Explain your architecture to a non-technical stakeholder." Peer scoring on clarity, accuracy, engagement.

Case Study #1 Assigned

"StartupCo needs CI/CD" — simple, clear requirements. Build a POC, write a recommendation.

Week 2

Tool Mastery — JFrog Deep Dive

Pre-Work

  • JFrog Artifactory self-paced tutorial
  • Set up free tier, push artifacts manually
  • JFrog competitive landscape overview
  • Watch a presales demo recording — note what works

Build Lab

Integrate JFrog Artifactory into the Week 1 app's CI/CD pipeline using Claude Code. Facilitator demos initial config, then teams build.
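Under the hood, the Artifactory step a CI job adds is an authenticated HTTP PUT of the artifact to a repository path on the server. A hedged sketch of that step in Python, where the host, repository name, path, and token are placeholder assumptions:

```python
# Sketch of the Artifactory deploy step a CI pipeline performs:
# PUT the artifact bytes to <base>/<repo>/<path> with an auth header.
# All names below (host, repo, token) are illustrative placeholders.
import urllib.request

def artifactory_upload_request(base_url: str, repo: str, path: str,
                               data: bytes, api_token: str) -> urllib.request.Request:
    """Build (but do not send) the PUT request that deploys one artifact."""
    url = f"{base_url.rstrip('/')}/{repo}/{path.lstrip('/')}"
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Authorization", f"Bearer {api_token}")
    return req

# In a real pipeline the job would send it with urllib.request.urlopen(req)
# and fail the build on any non-2xx response.
```

Separating "build the request" from "send it" also makes the step easy to unit-test before wiring it into the pipeline.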

Present + Critique

5-min team pitch: "Pitch JFrog to a CTO who manages artifacts manually." Facilitator plays the skeptical CTO. Peer scoring on value articulation and objection handling.

Case Study #1 Debrief

Pairs swap recommendations and critique each other's work in written peer review.

Week 3

Client Discovery — What They Actually Need

Pre-Work

  • SPIN Selling summary (Situation, Problem, Implication, Need-payoff)
  • Watch: discovery call recordings (good and bad)
  • Prepare 10 discovery questions for your project client

Build Lab

Mock discovery calls. Facilitator plays the project client (10 min per team). Client reveals constraints not in the brief. Other teams observe and note what worked.

Present + Critique

7-min team presentations: "Here's what changed in our solution after discovery, and why." Must show: assumption → insight → revised approach.

Case Study #2 Assigned

"MidCorp's Broken Pipeline" — debugging scenario with a frustrated client. Diagnose from logs using Claude Code, write a root-cause analysis + fix plan.

Week 4

Advanced AI — Claude Code as a Power Tool

Pre-Work

  • Advanced Claude Code: MCP servers, slash commands, CLAUDE.md
  • Read: "AI-first development anti-patterns"
  • Document every prompt you use and rate its effectiveness
  • Case Study #2 peer feedback exchange
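For the CLAUDE.md pre-work item, a sketch of what a team's project memory file might contain; the conventions below are illustrative placeholders, not programme requirements.

```markdown
# CLAUDE.md (team project memory, illustrative)

## Build & test
- Run the test suite before declaring a task done.
- Build the container locally before pushing pipeline changes.

## Conventions
- Keep handlers small; one endpoint per concern.
- Never hard-code credentials; Artifactory tokens come from the CI environment.

## Context
- The Week 1 app and the Week 2 Artifactory integration live in this repo.
```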

Build Lab

"AI Speed Build" challenge. All teams get the same new requirement. Race to implement with Claude Code — judged on quality, not just speed. Facilitator flags anti-patterns in real-time.

Present + Critique

7-min technical deep-dive: "Here's what we automated with Claude Code and the ROI." Must include one thing built WITHOUT AI and why.

Case Study #3 Assigned

"Enterprise evaluates JFrog vs competitor" — competitive positioning brief. Pure analysis, no building.

Week 5

Storytelling + Rehearsal

Pre-Work

  • Vignette Method: Situation → Complication → Resolution
  • Analyse: 1 great pitch vs 1 terrible pitch
  • Draft full project presentation outline
  • Anticipate 5 hard client questions + draft responses

Build Lab — Dress Rehearsal Round 1

Full project pitch (10 min per team). Facilitator interrupts in real time: "Stop — that slide is confusing. Redo." Focus on narrative arc, demo flow, value articulation.

Present + Critique

"Teach One Thing" session (30 min): Each team teaches a concept they learned deeply. Then Dress Rehearsal Round 2 with senior staff as client panel + tough Q&A.

Case Study #3 Debrief

Competitive positioning briefs are posted to a shared channel. Each intern comments on two others — building peer review habits.

Week 6

The Final Engagement

Session 1 — The Curveball

Each team receives a last-minute scenario change:

  • "The client's CTO just left. The new CTO is non-technical."
  • "Budget was cut 40%. Justify the reduced scope."
  • "A competitor just matched your key differentiator."

45 minutes to adapt everything.

Session 2 — The Final Pitch

15 min per team (10 min presentation + 5 min Q&A). Judged by facilitator + senior staff + leadership.

Scoring Rubric

25% Technical Quality
25% AI-First Approach
25% Client Value
25% Presentation Quality

Awards + Retrospective

Recognise: best overall, best technical depth, best presenter, most improved. Group retro: what stuck, and what would you change?

Breadth Track

3 case studies build pattern recognition

While the project gives depth, these short homework cases expose interns to diverse client scenarios.

W1-W2

"StartupCo Needs CI/CD"

Simple, clear requirements. Friendly client. Build a POC, write a recommendation.

Skill: Solution design basics
W3-W4

"MidCorp's Broken Pipeline"

Debugging scenario. Frustrated client. Diagnose from logs, propose fix + prevention plan.

Skill: Rescue & communication under pressure
W4-W5

"Enterprise Bake-Off"

JFrog vs competitor. No building — pure analysis, positioning, and persuasion.

Skill: Competitive strategy

Communication Arc

Presentations grow every week

W1
3 min — Individual — Explain architecture to non-technical person
W2
5 min — Team — Pitch to skeptical CTO
W3
7 min — Team — Revised solution after discovery
W4
7 min — Team — Technical deep-dive demo
W5
10 min — Team — Full dress rehearsal + Q&A
W6
15 min — Team — Final pitch with curveball + Q&A

Evidence Base

Why this design works

Distributed Practice

Short, spaced 1-hour sessions outperform massed intensive workshops. In the classic postal-worker typing study, distributed learners were both faster and more accurate.

Flipped Classroom

Meta-analysis of 53 STEM studies: significant positive impact, especially for short-duration interventions.

Peer Teaching

Tutors retain 61.9% vs 28.3% for control groups. The "teach one thing" session leverages this directly.

Whole-Task Strategy

Complete, real-world scenarios outperform decomposed sub-problems. Every session is a real case, not an exercise.

Spiral Curriculum

Bruner: revisiting topics with increasing complexity produces deeper understanding than linear coverage.

AI + Fundamentals

Research shows developers who use AI for inquiry outperform those who delegate blindly. We train both: when to use AI and when not to.

Ready?

Let's build the next generation of technical consultants.

6 weeks. 12 sessions. One transformative journey.

12 Sessions
3 Case Studies
1 Project
1 Final Pitch