If you're searching for "how to structure a media buying team," you're probably not actually looking for an org chart.
You're trying to solve one of these problems:
Scale ad spend without losing control. Your CPAs are spiking, reporting has become a mess, and the team ships slower as it grows.
Stop depending on hero buyers. You've got one or two people who keep everything in their head, and if they leave, you're in trouble.
Build a creative engine that actually finds winners. Right now you're hoping the next batch of ads hits instead of systematically discovering what works.
Make performance marketing measurable again. Attribution is messy by default, and nobody trusts the numbers anymore.
Create a system that survives turnover. New hires shouldn't break everything when they start.
A well-structured media buying team solves these problems. Everyone knows what they own. Decisions happen fast without random people vetoing things late. Experiments are designed cleanly, launched correctly, and learned from. Creative production and media execution work as one connected loop, not two silos. And reporting is trusted because tracking standards are actually enforced.
This guide is built for 2026 reality: platforms automate more, targeting signals are weaker, and creative quality plus iteration speed decide who wins. That's not motivational talk. It's visible in the direction of the industry and the tooling platforms are shipping. Nielsen expanded its campaign effectiveness suite to include creative evaluation. Kantar, Meta, and CreativeX published research on digital creative quality gaps. TikTok World 2024 showcased TikTok One and Symphony, TikTok's creative AI tooling.
Creative isn't optional garnish anymore. It's the primary performance lever.
What Does a Media Buying Team Actually Do?
A media buying team is an uncertainty-handling machine.
You're doing three things on repeat:
1. Creating hypotheses about what will cause more profitable conversions (offer, angle, hook, proof, format, landing page, audience, placement, bid strategy)
2. Running controlled experiments in noisy environments (auction dynamics, seasonality, creative fatigue, learning phases, tracking gaps)
3. Allocating budget toward what's most likely to produce incremental profit (not just what looks good in a dashboard)
Everything else exists to support those three activities. Campaign builds, naming conventions, UTMs, previews, approvals, and dashboards are all infrastructure for hypothesis testing, experimentation, and smart budget allocation.
So your team structure should map to four critical loops:
→ Creative: briefs, production, and iteration
→ Launch: builds, QA, trafficking, and tracking
→ Optimization: pacing, scaling, and killing
→ Measurement: attribution, reconciliation, and reporting
If you don't assign owners to each loop, you'll get lots of activity and meetings but very little learning, plus unstable performance and constant arguments about numbers.
Why Adding More Media Buyers Doesn't Fix Your Problems
Performance plateaus. Leadership says "we need another media buyer." You hire one. Output goes up briefly. Then chaos returns.
Why? The bottleneck was never the number of buyers.
The real bottlenecks are usually:
→ Creative throughput
→ QA and operational consistency
→ Measurement trust
→ Decision rights and prioritization
→ Feedback loops between creative and media
Your structure should attack the true bottleneck, not just add more people.
4 Media Buying Team Structures (Which One Works for You?)
There's no single correct structure. There are tradeoffs. Pick the structure that matches your complexity and your competitive advantage.
1. Channel Pods (Meta Team, TikTok Team, Google Team)
Best when: Channels are truly different businesses (different creative language, measurement approaches, landing pages, or product lines)
| Aspect | Details |
|---|---|
| Pros | Deep platform expertise; faster iteration inside each channel; clear accountability by channel |
| Cons | Silo risk (creative learnings don't transfer); duplicated work in analytics and reporting; teams compete for same creative resources |
| Use this when | You're big enough that each channel is a standalone P&L lever |
2. Funnel Pods (Prospecting vs Retargeting vs Retention)
Best when: You have a clear funnel model and different creative plus messaging per stage
| Aspect | Details |
|---|---|
| Pros | Messaging consistency per stage; clear creative briefs and KPIs; strong lifecycle thinking |
| Cons | Channels can blur (TikTok might do both); ownership clashes when platforms do "everything" campaigns |
| Use this when | Your growth model is funnel-structured and your team is disciplined about definitions |
3. Market Pods (US, UK, EU, LATAM)
Best when: Localization and market nuance matter (language, offers, creators, regulations, seasonality)
| Aspect | Details |
|---|---|
| Pros | Faster localization decisions; better cultural creative fit; cleaner ownership across markets |
| Cons | Harder to standardize ops; measurement fragmentation if UTMs and naming differ |
| Use this when | Multi-market scaling is the main complexity you're managing |
4. Center of Excellence + Embedded Execution (Hybrid)
A small central team sets standards (naming, UTMs, measurement, experimentation rules), while embedded teams execute for product lines or markets.
| Aspect | Details |
|---|---|
| Pros | Standards stay consistent; execution stays close to the business; scales cleanly |
| Cons | Requires strong central leadership; easy to become bureaucracy if CoE overreaches |
| Use this when | You're scaling, multi-team, and want consistency without slowing down |
Why Creative Quality Beats Media Buying Tactics in 2026
A lot of org charts still treat creative like a service desk where media "requests assets."
That's structurally wrong in modern paid social.
Major research from Kantar, Meta, and CreativeX (published November 2024) found significant quality gaps in digital creative execution and reported major lifts tied to specific creative features like human presence, visual dynamism, and fast storytelling. Adoption of those features across campaigns was surprisingly low.
Separately, measurement companies are literally productizing creative evaluation as part of outcomes measurement. Nielsen announced exactly that in 2024, another signal that creative isn't optional.
And TikTok's product roadmap at TikTok World 2024 was heavily creative-centric, including TikTok One and TikTok Symphony for creative AI tooling.
Your structure should make it easy to:
→ Translate performance data into creative briefs quickly
→ Produce lots of variations without losing brand consistency
→ Launch them fast using bulk upload workflows
→ Learn and iterate
That means creative strategy is not nice-to-have. It's a core function.
7 Essential Media Buying Team Roles (And What Each Actually Does)
Below is the cleanest way to define roles. Focus on what they produce, not what they do.
1. Head of Growth / Performance Marketing Lead
Owns: Business results, budget allocation, prioritization, hiring plan, standards
Outputs:
• Clear targets (CAC, MER, payback, LTV:CAC, margin)
• Budget strategy and guardrails
• Channel and creative strategy alignment
• A cadence that forces learning (not opinions)
Red flags:
• Spends most time in Ads Manager
• No written strategy or guardrails
• Teams fight over attribution weekly
2. Media Strategy Lead (Optional Early, Mandatory Later)
This person turns business goals into a testing roadmap.
Outputs:
• Quarterly testing thesis (what are we trying to prove?)
• Experiment roadmap and prioritization
• Scaling rules (when to increase budget, when to kill)
Red flags:
• "Strategy" is just slide decks
• No link between tests and decisions
3. Channel Media Buyer (Meta, TikTok, Google, Etc.)
Owns: Day-to-day performance, experiments, and spend decisions within guardrails
Outputs:
• Clean experiments (hypothesis, variable, success metric)
• Pacing and budget changes logged
• Creative feedback that is specific, not vibes
• Stable scaling without resetting learning unnecessarily
Red flags:
• Constantly changes multiple variables at once
• No written learnings
• Can't explain why performance changed
4. Creative Strategist (The Bridge Between Data and Persuasion)
This role exists because creative and media too often become silos. A creative strategist explicitly serves as the connector between performance data and creative execution.
Outputs:
• Creative brief that is testable (hook, claim, proof, angle, format)
• Creative backlog prioritized by expected impact
• Post-launch analysis translated into "next creative instructions"
• Creative sprint rhythm
Red flags:
• Briefs are generic ("make it punchier")
• No mapping between creative elements and performance outcomes
5. Creative Production (Designer, Editor, UGC Coordinators, Copywriter)
Owns: Turning briefs into assets fast, at quality
Outputs:
• On-time delivery of variations (with correct specs)
• Consistent brand constraints
• A reusable library of patterns and building blocks
Red flags:
• Every asset is bespoke and slow
• No version control or naming consistency
6. Marketing Analyst / Measurement Lead
Owns: The truth layer
In a world where last-click can be wildly misleading, this function becomes a growth unlock. Research shows regression-based attribution approaches can produce dramatically different ROI conclusions than last-click in multi-channel analysis.
You don't need to accept the exact number to accept the lesson: measurement method changes decisions.
Outputs:
• Single source of truth dashboards
• Attribution sanity checks (platform vs GA vs backend)
• Lift testing or MMM roadmap (when scale requires it)
• Clear definitions: what is counted, where, and why
Red flags:
• "Reporting" is screenshots from ad platforms
• No documented tracking taxonomy (UTMs, naming, events)
7. Ad Ops / Launch Engineer (Often Ignored, Always Expensive When Missing)
This is the role that prevents "we spent $200,000 on broken UTMs."
Outputs:
• Correct builds, QA, trafficking standards
• Naming and UTM enforcement
• Asset QA (correct aspect ratios, safe zones, landing pages)
• Fast, repeatable launches
Red flags:
• Buyers spend half their day building and tagging
• Constant tracking errors
• Slow time from "creative ready" to "live"
In high-volume teams, ad ops is the difference between "we ship tests" and "we talk about tests."
How Many People Do You Need on Your Media Buying Team?
Forget spend thresholds. They fail across business models.
Use complexity thresholds instead:
| Complexity Factor | Questions to Ask |
|---|---|
| Creative velocity | How many new creatives per week? |
| Campaign scale | How many active campaigns or ad sets? |
| Market diversity | How many markets or languages? |
| QA risk | How much QA risk (regulated industry, strict brand rules)? |
| Attribution complexity | How broken is attribution? |
| Launch efficiency | How long does launch take per batch? |
Stage 0: 1 Person (Founder or Growth Generalist)
Reality: You cannot do everything well. Pick your advantage.
Best structure:
① 1 generalist runs ads and creative strategy
② Contractors handle editing and design
③ You document standards early (naming, UTMs)
Non-negotiable:
• A written testing log (even a spreadsheet; a minimal column set is sketched below)
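If that log starts life as a spreadsheet or flat file, a minimal column set might look like this. The field names are illustrative, not a required schema:

```python
import csv
import os
from datetime import date

# Illustrative columns for a minimal testing log; adapt the names to your own workflow.
LOG_COLUMNS = ["date", "channel", "hypothesis", "variable_changed",
               "budget", "success_metric", "result", "decision"]

def append_test(path: str, row: dict) -> None:
    """Append one experiment record, writing the header if the file is new."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_COLUMNS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

append_test("testing_log.csv", {
    "date": date.today().isoformat(),
    "channel": "meta",
    "hypothesis": "UGC hook beats static for offer X",
    "variable_changed": "creative format",
    "budget": 500,
    "success_metric": "CPA",
    "result": "CPA -18% vs control",
    "decision": "scale",
})
```

The exact columns matter less than the discipline: every test gets a hypothesis before launch and a decision after.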
Stage 1: 2-3 People (First Real Team)
Goal: Create a repeatable weekly cycle
Org chart:
• Growth Lead
- Media Buyer (Meta + TikTok)
- Creative producer (in-house or freelance)
- Analyst (part-time or fractional)
Key design choice:
Do you want the media buyer to also "own creative strategy"? If yes, you still need a formal creative briefing process or creative output collapses.
Stage 2: 4-7 People (The First Scalable Setup)
Goal: Separate thinking from doing
Org chart:
• Head of Growth
- Paid Social Lead
- Media Buyer (Meta)
- Media Buyer (TikTok)
- Ad Ops / Launch
- Creative Strategist
- Editor/Designer (or a pod of freelancers)
- Analyst / Measurement (fractional or full-time depending on complexity)
Why this works:
Buyers stay in optimization and learning. Creative strategist keeps briefs tight and feedback loops fast. Ad ops protects data quality.
Stage 3: 8-15 People (Multi-Pod Scale)
Goal: Maintain speed while adding governance
Two strong options:
① Channel pods + shared creative and measurement
② Market pods + shared ad ops standards
At this stage, you also need:
• A documented career ladder (junior buyer → buyer → senior → lead)
• A central naming + UTM standard enforced everywhere
• A QA process that catches mistakes before spend
Stage 4: 15+ People (Enterprise Complexity)
Goal: Avoid bureaucracy while preventing chaos
Use the CoE + embedded model:
Center team owns:
• Measurement standards
• Naming and UTM taxonomies
• Experimentation framework
• Tooling
Embedded pods own:
• Weekly execution
• Creative sprints
• Channel performance
This is also where "creative ops" becomes an actual function, not an afterthought. Many teams still lack dedicated creative operations, and that gap shows up as missed deadlines, asset confusion, and approval bottlenecks.
How to Structure Creative and Media Teams to Work Together
Most "media buying team" posts ignore the creative workflow, which is why they're not that useful.
A modern paid social team should be structured so creative production is not an external dependency.
Your structure should make these true:
→ Creative gets specific briefs with testable hypotheses
→ Media gets consistent supply of variations
→ Both share the same truth layer (measurement)
→ Learnings update the next briefs within days, not weeks
The Creative Brief Template (Copy/Paste)
Use this for every concept. A minimal version you can adapt (the fields mirror what a testable brief needs):
• Hypothesis: what we expect to happen and why
• Hook: the opening line or first few seconds
• Angle: the persuasion route (pain point, social proof, comparison, novelty)
• Claim + proof: what we promise and what backs it up
• Format: UGC, static, carousel, video
• Audience + funnel stage
• Success metric and minimum spend before judging
This is how you stop creative from being random.
How to Set Decision Rights on Your Media Buying Team
Most media teams are slow because nobody knows:
• Who can launch without approval
• Who can scale budgets
• Who can kill an experiment
• Who owns naming and tracking standards
Create three levels of decisions:
Level 1: Guardrails (Set by Head of Growth)
Examples (see the config sketch after this list):
• Max daily budget increase without approval
• Minimum test budget per hypothesis
• Risk rules (brand safety, compliance)
• KPI definitions
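Writing guardrails down as a small, machine-readable config keeps them out of people's heads and out of debate. A sketch with placeholder numbers; none of these thresholds are recommendations:

```python
# Hypothetical guardrail config; every number here is a placeholder to adapt.
GUARDRAILS = {
    "max_daily_budget_increase_pct": 20,    # larger increases need Head of Growth sign-off
    "min_test_budget_per_hypothesis": 500,  # don't judge a test below this spend
    "max_cpa": {"meta": 45.0, "tiktok": 55.0},
    "brand_safety": {"require_compliance_review": True},
    "kpi_definitions": {"CPA": "spend / backend purchases", "MER": "revenue / total ad spend"},
}

def budget_change_needs_approval(current: float, proposed: float) -> bool:
    """True if a proposed daily budget jump exceeds the autonomous-change guardrail."""
    increase_pct = (proposed - current) / current * 100
    return increase_pct > GUARDRAILS["max_daily_budget_increase_pct"]

print(budget_change_needs_approval(1000, 1300))  # True: a 30% jump needs approval
```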
Level 2: Local Autonomy (Owned by Channel Lead/Buyers)
Examples:
• Creative testing within a budget bucket
• Rotating in new creatives weekly
• Small bid or budget adjustments
Level 3: Cross-Functional Decisions (Requires Sync)
Examples:
• Offer changes
• Landing page changes
• Tracking schema changes
• Market expansion
Write these down. Otherwise your team either asks permission for everything or does everything and surprises leadership later.
What Are the Best Meeting Cadences for Media Buying Teams?
Here's the simplest weekly cadence that scales.
Daily (15-30 Minutes Per Channel)
Owner: Channel buyer
Checklist (a scripted version is sketched at the end of this section):
• Spend pacing vs plan
• CPA or ROAS vs guardrails (not vibes)
• Delivery issues (learning limited, disapprovals)
• Creative fatigue signals (frequency, CTR decay, CVR decay)
Log:
• Any budget change
• Any creative swap
• Any anomaly
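Parts of this daily check can be scripted so buyers react to breaches instead of eyeballing dashboards. A rough sketch, assuming you can export spend, CPA, frequency, and a CTR trend per campaign; the thresholds are illustrative:

```python
# Rough daily-check sketch; thresholds and field names are illustrative.
def daily_flags(campaign: dict, planned_daily_spend: float, cpa_guardrail: float) -> list[str]:
    flags = []
    pacing = campaign["spend_today"] / planned_daily_spend
    if not 0.8 <= pacing <= 1.2:
        flags.append(f"pacing off plan: {pacing:.0%} of planned spend")
    if campaign["cpa"] > cpa_guardrail:
        flags.append(f"CPA {campaign['cpa']:.2f} above guardrail {cpa_guardrail:.2f}")
    if campaign["frequency"] > 3.5 or campaign["ctr_vs_last_week"] < -0.25:
        flags.append("possible creative fatigue (frequency / CTR decay)")
    return flags

print(daily_flags(
    {"spend_today": 620, "cpa": 52.0, "frequency": 4.1, "ctr_vs_last_week": -0.30},
    planned_daily_spend=1000, cpa_guardrail=45.0,
))
```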
Weekly Creative Sprint (60-90 Minutes)
Owners: Creative strategist + buyers
Agenda:
① Last week's learnings (what worked, what failed, why)
② Identify the next 3-5 hypotheses
③ Decide the next batch (concepts, angles, hooks, formats)
④ Assign owners and deadlines
⑤ Confirm launch date and QA
Deliverable: A written creative brief for each concept
Weekly Experiment Review (30-60 Minutes)
Owner: Head of Growth or Paid Social Lead
Rules:
• You don't "debate" results. You interpret them.
• Every test ends with a decision:
- Scale
- Iterate
- Kill
- Re-test with a tighter hypothesis
Monthly Measurement Sanity Check (60-90 Minutes)
Owner: Analyst or measurement lead
Review:
• Platform vs GA vs backend deltas
• UTM compliance rate
• Event tracking health
• Incrementality plan (do we need a lift test this quarter?)
This is where teams get smarter instead of just busier.
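For the platform vs GA vs backend part of the review, a tiny reconciliation script is often enough to turn "the numbers feel off" into a concrete delta. A minimal sketch, assuming you can export conversion counts from each source; the 15% tolerance is an example, not a standard:

```python
# Minimal reconciliation sketch; the tolerance is an example threshold, not a standard.
def reconcile(platform: int, analytics: int, backend: int, tolerance: float = 0.15) -> dict:
    """Compare each source against the backend (treated here as the source of truth)."""
    deltas = {
        "platform_vs_backend": (platform - backend) / backend,
        "analytics_vs_backend": (analytics - backend) / backend,
    }
    deltas["needs_investigation"] = any(abs(d) > tolerance for d in deltas.values())
    return deltas

print(reconcile(platform=1240, analytics=980, backend=1050))
# {'platform_vs_backend': 0.18..., 'analytics_vs_backend': -0.07..., 'needs_investigation': True}
```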
What Standards Does Every Media Buying Team Need?
If you skip this section, your structure will still fail because your reporting becomes untrustworthy.
1. Naming Conventions Are Not Aesthetics, They Are Retrieval
Your future self needs to answer questions like:
• "Which hook style wins for offer X?"
• "Which editor's variants convert best?"
• "What did we ship for UK prospecting last week?"
That requires structured naming.
AdManage has documentation specifically on configuring ad naming conventions and date formats, which is the kind of standard you want to enforce across accounts. Learn more about establishing naming conventions for your creative assets.
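The specifics of what AdManage lets you configure live in its docs; the structural point is that an ad name should be buildable and parseable by a script, never composed ad hoc. An illustrative scheme (the field order and separators here are assumptions, not AdManage's format):

```python
# Illustrative naming scheme; field order and separator are assumptions, not AdManage's format.
FIELDS = ["date", "channel", "market", "funnel", "concept", "hook", "editor", "version"]

def build_ad_name(**values: str) -> str:
    return "_".join(values[f] for f in FIELDS)

def parse_ad_name(name: str) -> dict:
    return dict(zip(FIELDS, name.split("_")))

name = build_ad_name(date="20260119", channel="meta", market="uk", funnel="prospecting",
                     concept="painpoint", hook="ugc-question", editor="jk", version="v03")
print(name)                          # 20260119_meta_uk_prospecting_painpoint_ugc-question_jk_v03
print(parse_ad_name(name)["hook"])   # "which hook wins" becomes a group-by, not archaeology
```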
2. UTM Standards Are How You Prevent "Attribution Soup"
UTMs aren't for vanity. They're for answering:
• What creative drove quality traffic?
• What campaigns drove downstream revenue?
• What channel mix is actually working?
AdManage's UTM guide is a solid reference on why consistency matters and how to structure UTMs for paid social.
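The highest-leverage habit is deriving UTMs from the same structured fields as your ad names instead of typing them by hand. A sketch, assuming a simple source/medium/campaign/content mapping (see AdManage's UTM guide for the scheme it actually recommends):

```python
from urllib.parse import urlencode

# Illustrative UTM builder; the parameter mapping is an assumption, not a prescribed standard.
def add_utms(landing_page: str, channel: str, campaign: str, ad_name: str) -> str:
    params = {
        "utm_source": channel,      # e.g. "facebook" or "tiktok"
        "utm_medium": "paid_social",
        "utm_campaign": campaign,   # campaign name from your naming convention
        "utm_content": ad_name,     # full structured ad name, so creatives stay traceable
    }
    sep = "&" if "?" in landing_page else "?"
    return landing_page + sep + urlencode(params)

print(add_utms("https://example.com/offer", "facebook", "uk_prospecting_q1",
               "20260119_meta_uk_prospecting_painpoint_ugc-question_jk_v03"))
```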
3. QA Is a Role, Not a Vibe
A real QA checklist catches:
→ Wrong landing page
→ Missing UTMs
→ Wrong pixel or event
→ Wrong identity or page
→ Wrong creative placement mapping
→ Wrong language variant
→ Wrong offer text
If you don't create QA ownership, you'll pay for it later in wasted spend and misleading conclusions.
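Several of those checks can be enforced automatically before anything spends. A rough pre-launch validation sketch; the field names and rules are illustrative, and a real version would pull values from your build sheet or the platform API:

```python
# Rough pre-launch QA sketch; field names and rules are illustrative.
def qa_ad(ad: dict, approved_domains: set[str], expected_pixel: str, market_language: str) -> list[str]:
    errors = []
    if not any(ad["landing_page"].startswith(f"https://{d}") for d in approved_domains):
        errors.append("landing page not on an approved domain")
    if "utm_source=" not in ad["landing_page"]:
        errors.append("missing UTMs on landing page URL")
    if ad["pixel_id"] != expected_pixel:
        errors.append("wrong pixel")
    if ad["language"] != market_language:
        errors.append("wrong language variant for this market")
    return errors

print(qa_ad(
    {"landing_page": "https://example.com/offer", "pixel_id": "px_123", "language": "de"},
    approved_domains={"example.com"}, expected_pixel="px_123", market_language="en",
))
# ['missing UTMs on landing page URL', 'wrong language variant for this market']
```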
Should You Automate Ad Operations or Hire More People?
High-volume teams drown in launch work. That's not strategy. It's repetitive configuration.
AdManage's public status page shows the scale of this problem: in the last 30 days, it reports 940,385 ads launched, 120,418 batches, and 109.7K hours of time saved.
That's not just a product metric. It's a signal that "launch labor" is a real bottleneck across the market.
What Automation Changes Structurally
If launching gets 10x faster, the optimal org chart changes:
→ Fewer people doing manual builds
→ More people doing creative strategy, analysis, and iteration planning
A concrete example from AdManage's bulk upload guide:
"Launch 50 basic ads" is shown as 2.5 hours manually versus 3 minutes via bulk upload. It also claims manual uploads can have a 12-15% error rate, while validated bulk workflows are under 1%.
Even if your exact numbers differ, the structural point holds:
The faster you can launch correctly, the more your team should shift toward learning and creative throughput.
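To make that concrete, run the quoted figures at a realistic monthly volume (swap in your own measured rates; these are the numbers from the guide above, not a guarantee):

```python
# Worked arithmetic using the figures quoted above; replace with your own measured rates.
ads_per_month = 400
manual_minutes_per_ad = 2.5 * 60 / 50   # 2.5 hours for 50 ads -> 3 minutes per ad
bulk_minutes_per_batch = 3              # 3 minutes per 50-ad batch

manual_hours = ads_per_month * manual_minutes_per_ad / 60
bulk_hours = (ads_per_month / 50) * bulk_minutes_per_batch / 60

print(f"manual: {manual_hours:.0f} h/month, bulk: {bulk_hours:.1f} h/month")
# manual: 20 h/month, bulk: 0.4 h/month (roughly half a week of buyer time freed up)
```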
Practical Implication for Structuring Your Team
You can choose one of two philosophies:
Philosophy A: Hire your way out
Add ad ops headcount as volume rises.
Philosophy B: Tool your way out
Automate repetitive work, keep the team lean, invest in creative plus measurement.
AdManage pricing is fixed fee (in-house £499/month, agency £999/month) with plan features like unlimited team members and launches, which supports the "tool your way out" model if you're scaling volume.
How to Use AdManage in Your Media Buying Team Structure
A simple way to think about tooling:
→ If your bottleneck is ideas and production, you need creative capacity and better briefs
→ If your bottleneck is launch labor and errors, you need ad ops ownership and/or automation
→ If your bottleneck is truth, you need measurement ownership and standards
AdManage is designed for the "launch labor" bottleneck:
• Bulk creation and launching across Meta and TikTok
• Naming and UTM control
• Post ID workflows for preserving engagement
• Standardized workflows across accounts
Useful internal references if you're building your team's operating system:
• Bulk launching and operational throughput
• UTM standards for paid social
• Creative testing framework basics
• Post ID and Creative ID workflows to preserve social proof
• Naming convention configuration
• How to automate Facebook ad creation
• Managing multiple Facebook ad accounts
If you automate launch work, you can design your team to spend more cycles on creative and learning, which is where the compounding returns live.
Who Should Own Measurement and Attribution on Your Team?
Attribution is messy, especially for:
→ Multi-touch journeys
→ Cross-device behavior
→ Privacy constraints
→ App install measurement via SKAN
If you run app installs, SKAdNetwork 4 introduced multiple postbacks and different conversion value types, which changes what you can observe and when.
Understanding multi-touch attribution versus last-click attribution becomes critical as methodology choices can flip conclusions.
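A toy example of how much the method matters: the same three journeys credited by last-click versus an even (linear) split point to different "winning" channels. This is illustrative only; real multi-touch models are more involved than this.

```python
from collections import defaultdict

# Toy journeys: ordered channel touches ending in a conversion worth the given revenue.
journeys = [
    (["tiktok", "meta", "google_brand"], 100),
    (["meta", "google_brand"], 100),
    (["tiktok", "meta"], 100),
]

def credit(journeys, method):
    revenue = defaultdict(float)
    for touches, value in journeys:
        if method == "last_click":
            revenue[touches[-1]] += value
        elif method == "linear":
            for t in touches:
                revenue[t] += value / len(touches)
    return dict(revenue)

print(credit(journeys, "last_click"))  # google_brand 200, meta 100, tiktok 0
print(credit(journeys, "linear"))      # meta ~133.3, google_brand ~83.3, tiktok ~83.3
```

Same spend, same conversions, different "best channel" depending on the method. That's why someone has to own the methodology.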
Minimum Viable Measurement Ownership
→ Someone owns UTMs and naming standards
→ Someone owns event tracking health checks
→ Someone owns "platform vs analytics vs backend" reconciliation
→ Someone owns incrementality planning (lift tests, MMM when needed)
If nobody owns that, your team will spend half its time arguing about numbers.
How to Use RACI to Clarify Team Responsibilities
Below is a simple responsibility map you can adapt.
| Workstream | Responsible | Accountable | Consulted | Informed |
|---|---|---|---|---|
| Budget strategy + guardrails | Paid Social Lead | Head of Growth | Finance | Team |
| Experiment roadmap | Media Strategy Lead / Channel Lead | Paid Social Lead | Creative Strategist | Team |
| Creative briefs | Creative Strategist | Paid Social Lead | Channel Buyers | Creative Producers |
| Asset production | Creative Producers | Creative Lead / Creative Strategist | Buyers | Team |
| Campaign builds + launches | Ad Ops | Channel Lead | Creative Strategist | Team |
| QA + tracking verification | Ad Ops | Channel Lead | Analyst | Head of Growth |
| Optimization | Channel Buyers | Channel Lead | Analyst | Head of Growth |
| Reporting + attribution sanity | Analyst | Head of Growth | Channel Leads | Team |
| Documentation + standards | Ops Lead / Ad Ops | Head of Growth | Analyst | Team |
If you implement nothing else from this article, implement this.
What Order Should You Hire Your Media Buying Team?
This is the highest-leverage part for most companies.
Hire #1: A Real Media Buyer (Not "Someone Who Can Click Buttons")
Look for:
• Structured thinking
• Experiment design discipline
• Ability to explain causality (not just "it dropped")
Hire #2: Creative Throughput (Producer or Editor)
Because without creative supply, the buyer has nothing to test.
Hire #3: Creative Strategist (Or a Buyer Who Can Truly Do It)
Because most teams fail at turning data into briefs.
Hire #4: Ad Ops / Launch + QA
Because volume increases break tracking and slow down execution.
Hire #5: Measurement/Analyst
Because the cost of wrong decisions rises with spend.
If you hire in the opposite order (lots of buyers, no creative system), you get a busy team that learns slowly.
4 Common Media Buying Team Mistakes (And How to Fix Them)
Failure 1: Creative and Media Are Siloed
Symptom: Creative ships what they "like," media complains it "doesn't perform," nobody learns.
Fix: Install the creative strategist bridge role and a weekly creative sprint rhythm with structured testing frameworks.
Failure 2: Everyone Is Allowed to Change Everything
Symptom: Performance is unstable, nobody knows what caused what.
Fix: Decision rights + change logs.
Failure 3: Data Is Inconsistent, So Decisions Are Political
Symptom: Every meeting becomes an argument about attribution.
Fix: Measurement ownership, UTM and naming enforcement, monthly sanity checks.
Failure 4: Launch Work Eats the Team
Symptom: Buyers spend most of their day trafficking ads.
Fix: Either hire ad ops or automate. Bulk workflows exist for a reason, and the time-saved deltas can be massive at scale. Learn how to create multiple ads on Facebook efficiently.
What Are Media Buying Team Salaries in 2026?
Salaries vary heavily by location, seniority, and industry. Use these as ballpark anchors, not gospel.
| Role | Average Salary | Location | Date |
|---|---|---|---|
| Paid Social Manager | $77,966/year | US | Dec 2025 |
| Creative Strategist | $92,879/year | US | Jan 2026 |
| Ad Operations Specialist | $71,082/year | US | Jan 2026 |
| Performance Marketing Manager | £50,987/year | UK | Jan 2026 |
Budget note: If you're doing high-volume creative testing, the cost of slow execution and tracking mistakes often exceeds the cost of an additional role or the right tooling.
How to Restructure Your Media Buying Team in 30 Days
Week 1: Map Your Loops and Owners
→ Assign explicit owners for Creative, Launch, Optimization, Measurement
→ Write decision rights and guardrails
Week 2: Install Standards
→ QA checklist
→ "No launch without QA" rule
Week 3: Install Cadence
→ Daily channel checks
→ Weekly creative sprint
→ Weekly experiment review
→ Monthly measurement sanity check
Week 4: Fix the Bottleneck
Pick one:
→ Hire the missing role (creative strategist, ad ops, analyst)
→ Or implement automation that removes the bottleneck
Consider AdManage if launch labor is your bottleneck.
Frequently Asked Questions
What's the ideal size for a media buying team?
There's no magic number. Start with 5 people per team lead for agility. Scale through multiple pods rather than creating one giant team. Your ideal size depends on complexity factors: how many new creatives per week, how many active campaigns, how many markets, and how much QA risk you're managing.
Should I hire in-house or use an agency?
It depends on control, cost, and capability. In-house gives you more control, faster iteration, and deeper brand knowledge. Agencies offer expertise across channels and can ramp faster. Many companies use a hybrid: in-house for core performance channels (Meta, TikTok, Google) and agency for specialized channels (TV, programmatic, out-of-home).
When should I invest in automation vs hiring more people?
Invest in automation when your bottleneck is repetitive launch work, not strategy or creative thinking. If your team spends most of their time manually building campaigns, trafficking ads, and fixing tracking errors, automation will deliver better ROI than another headcount. AdManage's bulk upload tool shows 50 ads taking 2.5 hours manually versus 3 minutes automated. That math changes fast at scale.
How do I know if creative or media is my bottleneck?
Ask these questions: Are buyers waiting for creative (creative is the bottleneck)? Or is creative ready but sitting in a queue to launch (media/ad ops is the bottleneck)? Track "time from creative ready to live" and "creative requests fulfilled per week." If creative production is slower than your ability to test, that's your constraint.
What's the most important first hire?
After your first media buyer, hire for creative throughput (producer or editor). Most teams fail because they can't produce enough test variations, not because they lack optimization skills. A media buyer without creative supply has nothing to test. A creative producer without a buyer can't get feedback. You need both, but creative throughput unlocks testing velocity.
How do I structure creative and media to work together?
Install a creative strategist role as the bridge. Have them sit in (or lead) weekly creative sprints where buyers share performance data and together you decide the next batch of concepts. Use a standardized creative brief template so everyone speaks the same language. Make learnings update briefs within days, not weeks. Physical proximity helps too - seat creative and media teams together if possible.
What tools does my media buying team need?
At minimum: a bulk ad launch tool (like AdManage), an attribution or tracking platform, campaign management dashboards, and a way to enforce naming and UTM standards. Also consider: creative collaboration tools, project management systems, BI or analytics platforms, and automation for routine optimizations. The right tools can make a team of 5 perform like a team of 15.
How do I prevent attribution arguments on my team?
Give someone ownership of the "truth layer." That person (usually an analyst or measurement lead) owns UTM standards, event tracking health checks, platform vs backend reconciliation, and incrementality planning. Create a single source of truth dashboard everyone uses. Hold monthly measurement sanity checks to surface and fix discrepancies before they become arguments. Document clear definitions of what's counted where and why.
When does it make sense to use a tool like AdManage?
When launch labor becomes your bottleneck. If your team is spending hours manually creating and trafficking ads, if you're seeing 10-15% error rates in tracking, or if you're testing hundreds of creative variations and need bulk workflows, AdManage makes sense. It's particularly valuable for teams doing high-volume creative testing across Meta and TikTok with strict naming and UTM requirements. The fixed-fee pricing (£499-£999/month) means ROI improves as volume scales.
What's the difference between a media buyer and a media planner?
A media planner focuses on strategy: which channels to use, how to allocate budgets, when and where ads should run. They map out the game plan. A media buyer focuses on execution: negotiating placements, building campaigns, optimizing performance, analyzing results. They play the game. In small teams, one person often does both. As you scale, separating strategic planning from daily execution allows for deeper expertise in each.
Ready to Build Your High-Performance Media Buying Team?
The best media buying team structure isn't about copying someone else's org chart. It's about identifying your true bottlenecks, assigning clear ownership to the four critical loops (Creative, Launch, Optimization, Measurement), and giving your team the right tools to execute efficiently.
If launch labor is eating your team's time, you're not alone. Teams are launching nearly 1 million ads per month through automation because manual trafficking at scale simply doesn't work anymore.
Start by fixing your biggest bottleneck:
→ If it's creative throughput, hire a creative strategist and producer
→ If it's launch labor and errors, implement ad ops automation
→ If it's measurement trust, give someone ownership of your truth layer
Get started with AdManage to automate your launch workflow and free your team to focus on strategy, creative, and learning. Our fixed-fee pricing means you can scale volume without scaling headcount, and features like bulk launching, naming enforcement, UTM control, and Post ID preservation solve the operational bottlenecks that slow most teams down.
See how teams are launching 940K ads with AdManage →
Explore our documentation for naming conventions, UTM standards, and workflows →