6-Week Intensive Programme
From code to client — a hands-on workshop for the next generation of technical consultants.
The Problem
The Outcome
Use Claude Code as a core development tool, from scaffolding to deployment to debugging.
Deeply understand the product portfolio (JFrog + DevOps ecosystem) and architect real solutions.
Run discovery calls, handle objections, and deliver compelling client presentations under pressure.
Translate technical features into business outcomes. Know what the client needs before they do.
Programme Design
One deep project builds week over week. Every skill spirals back with more sophistication. Produces a real portfolio piece.
Zero lectures in sessions. All knowledge transfer is async. Sessions are 100% practice, presentation, and feedback.
Three short case studies as homework build pattern recognition across different client types and scenarios.
Interns teach each other what they've learned. Peer tutors retain 61.9% of material, one of the highest-retention methods documented in learning research.
Format
The project: teams of 3-4 are each assigned a realistic client scenario that they develop over six weeks, researching, building, and pitching a real solution.
Async prep: videos, docs, Claude Code tutorials, case study homework. All knowledge transfer happens here.
Build sessions: hands-on building with Claude Code. Guided early, independent later. Team-based.
Presentation sessions: teams present their work. Structured peer feedback. Facilitator debrief.
The Journey
Each week has one primary focus, but all tracks stay active every week.
Week 1
Build and deploy a containerized app end-to-end using Claude Code. Facilitator demos the first 10 min (thinking aloud), then interns branch off on their own (see the app sketch after this week's items).
3-min individual presentations: "Explain your architecture to a non-technical stakeholder." Peer scoring on clarity, accuracy, and engagement.
Case study "StartupCo needs CI/CD": simple, clear requirements. Build a POC, write a recommendation.
Week 2
Integrate JFrog Artifactory into the Week 1 app's CI/CD pipeline using Claude Code. Facilitator demos the initial config, then teams build (see the upload sketch after this week's items).
5-min team pitch: "Pitch JFrog to a CTO who manages artifacts manually." Facilitator plays the skeptical CTO. Peer scoring on value articulation and objection handling.
Pairs swap their StartupCo recommendations and critique each other's work in written peer review.
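For reference, a minimal sketch of what the Week 2 integration boils down to: deploying a build artifact to Artifactory, which accepts a plain HTTP PUT to a repository path. The host, repository, artifact path, and token are placeholders; a real pipeline would more likely use the JFrog CLI or a CI plugin.

```python
# upload_artifact.py - deploy a build artifact to Artifactory via HTTP PUT
# All names below (host, repo, paths) are placeholders for illustration.
import os
import requests

BASE_URL = "https://artifactory.example.com/artifactory"  # placeholder instance
REPO = "libs-release-local"                               # placeholder repository
TARGET = "myapp/1.0.0/myapp-1.0.0.tar.gz"                 # placeholder target path

def upload(local_path: str) -> None:
    token = os.environ["ARTIFACTORY_TOKEN"]  # keep credentials out of source control
    with open(local_path, "rb") as artifact:
        # Artifactory creates (or overwrites) the artifact at <repo>/<target>
        resp = requests.put(
            f"{BASE_URL}/{REPO}/{TARGET}",
            data=artifact,
            headers={"Authorization": f"Bearer {token}"},
        )
    resp.raise_for_status()
    print("Deployed:", resp.json().get("downloadUri"))

if __name__ == "__main__":
    upload("dist/myapp-1.0.0.tar.gz")
```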
Week 3
Mock discovery calls. Facilitator plays the project client (10 min per team) and reveals constraints not in the brief. Other teams observe and note what worked.
7-min team presentations: "Here's what changed in our solution after discovery, and why." Must show: assumption → insight → revised approach.
Case study "MidCorp's Broken Pipeline": a debugging scenario with a frustrated client. Diagnose from the logs using Claude Code, then write a root-cause analysis and fix plan (a log-triage sketch follows below).
"AI Speed Build" challenge. All teams get the same new requirement. Race to implement with Claude Code — judged on quality, not just speed. Facilitator flags anti-patterns in real-time.
7-min technical deep-dive: "Here's what we automated with Claude Code and the ROI." Must include one thing built WITHOUT AI and why.
"Enterprise evaluates JFrog vs competitor" — competitive positioning brief. Pure analysis, no building.
Week 5
Full project pitch (10 min per team). Facilitator interrupts in real time: "Stop. That slide is confusing. Redo." Focus on narrative arc, demo flow, and value articulation.
"Teach One Thing" session (30 min): each team teaches a concept they learned deeply. Then Dress Rehearsal Round 2, with senior staff as the client panel and tough Q&A.
Competitive positioning briefs are posted to a shared channel. Each intern comments on two others', building peer-review habits.
Week 6
Each team receives a last-minute scenario change and has 45 minutes to adapt everything.
Final presentations: 15 min per team (10 min presentation + 5 min Q&A). Judged by facilitator, senior staff, and leadership.
Recognition: best overall, best technical depth, best presenter, most improved. Group retro: what sticks, what would you change?
Breadth Track
While the project gives depth, these short homework cases expose interns to diverse client scenarios.
"StartupCo needs CI/CD" (Week 1): simple, clear requirements. Friendly client. Build a POC, write a recommendation.
"MidCorp's Broken Pipeline" (Week 3): a debugging scenario. Frustrated client. Diagnose from logs, propose a fix and prevention plan.
"Enterprise evaluates JFrog vs competitor" (Week 4): no building; pure analysis, positioning, and persuasion.
Communication Arc
Presentations scale up each week: 3-min individual explanations, 5-min team pitches, 7-min presentations and deep-dives, a 10-min full pitch, and finally the 15-min judged final with Q&A.
Evidence Base
Spaced 1-hour sessions outperform massed intensive workshops. In the classic postal-worker typing study, learners on a distributed schedule were both faster and more accurate.
A meta-analysis of 53 STEM studies found a significant positive impact, especially for short-duration interventions.
Peer tutors retained 61.9% of material versus 28.3% for control groups. The "Teach One Thing" session leverages this directly.
Complete, real-world scenarios outperform decomposed sub-problems. Every session is a real case, not an exercise.
Bruner's spiral curriculum: revisiting topics with increasing complexity produces deeper understanding than linear coverage.
Research shows developers who use AI for inquiry outperform those who delegate blindly. We train both skills: when to use AI and when not to.
Ready?
6 weeks. 12 sessions. One transformative journey.