How to Evaluate and Choose the Right SEO Agency

Choosing between SEO agencies can feel like guesswork when budgets are tight and local visibility matters. This guide gives Malaysian SMEs a practical, step-by-step framework to evaluate proposals, verify claims, and compare agencies against real business KPIs while protecting brand and UX. Read on for a weighted scoring matrix, sample interview questions, and a 90-day kickoff checklist to secure quick wins and measurable progress.

1. Establish what you need from an SEO partner

Start with outcomes, not tasks. Before you contact SEO agencies, define the business results you need: monthly organic revenue, number of qualified leads, phone calls from Google Business Profile, or visibility for keywords in specific Malaysian cities. Agencies will sell activity; you must buy outcomes. Without crisp KPIs you will compare proposals on price and busywork instead of impact.

Scope, ownership and the obvious tradeoffs

Be explicit about scope boundaries. Spell out whether you want local citations, on-page content creation, technical remediation, link acquisition, CRO experiments, or collaboration during a web redesign. The common tradeoff is speed versus ownership: project work delivers discrete fixes faster, while retainers buy continuous improvement but require clear monthly deliverables to avoid scope creep.

  • Essential integrations: Google Analytics 4, Google Search Console, and Google Business Profile, with the agency granted account access rather than sending PDFs, for transparency
  • Tool access: allow Screaming Frog, Ahrefs or Semrush audits to be shared and timestamped so claims are verifiable
  • Reporting cadence: require a dashboard or weekly snapshot and one source of truth for metrics to avoid conflicting reports

Concrete example: A neighbourhood F&B brand in Kuala Lumpur wanted more walk-in customers and takeout orders. The right brief highlighted GBP optimisation, menu and local schema, two landing pages for delivery areas, and a three-month content calendar for high-intent local keywords. That focused scope let shortlisted SEO agencies quote realistic time to value and price a fixed-scope pilot for local citations and a priority landing page.
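
As an aside, the "menu and local schema" item in that brief refers to structured data. A minimal sketch of what a Restaurant JSON-LD block might look like is below; every name, address, and URL is a hypothetical placeholder, and a real deployment should be validated against schema.org and Google's structured data guidelines.

```python
import json

# Minimal Restaurant JSON-LD sketch. All values are hypothetical placeholders.
schema = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Kopitiam",
    "servesCuisine": "Malaysian",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Kuala Lumpur",
        "addressCountry": "MY",
    },
    "hasMenu": "https://example.com/menu",
    "url": "https://example.com/",
}

# Serialise for embedding in the page head.
print(json.dumps(schema, indent=2))
```

The serialised JSON goes inside a `<script type="application/ld+json">` tag on the page; an agency that has done this before should be able to show you deployed examples immediately.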

Practical judgment: Many businesses insist on ranking guarantees and then get sold vanity metrics like impressions. For Malaysian SMEs, prioritise conversion metrics and time to first measurable win – phone calls, GBP actions or booking form submissions – over chasing page one for generic keywords. Also insist on ownership: content and CMS changes must remain with your account and not be locked behind the agency.

Prepare a one page brief: 3 business KPIs, current monthly organic baseline, your budget range, and the three services you require. This single page will filter out agencies that bid on tactics instead of outcomes.

If you want a template for this brief, or examples of what to include for local citations and technical audits, see our SEO services page and Google's guidance on search basics at Google Search Central. For local citation tools and practices, consult BrightLocal.

Next consideration – convert this brief into an RFP and ask for a short paid pilot so you can judge analysis quality and communication before committing to a retainer.

2. Capability checklist to evaluate agencies

Demand artifacts, not slogans. When you vet SEO agencies, the difference between a contractor and a partner shows up in deliverables: concrete audit exports, timestamped analytics, documented playbooks, and a clear handoff plan for your team. Anything else is marketing copy.

Technical hygiene and performance

What to expect: a reproducible crawl report, prioritized fixes with estimated impact, and examples of implemented structured data. Tradeoff: deep technical work often requires developer time and may delay visible traffic gains while stabilising the site.

Local visibility and citation management

Concrete check: ask for a sample Google Business Profile optimisation they executed and a list of local directories they use in Malaysia. Many vendors treat citations as checkbox work; the ones that deliver value map citations to location-specific landing pages and GBP category strategy.

Content architecture and production

Beyond blog posts: a capable agency shows topical clusters, editorial briefs, and content hooks tied to purchase intent. Limitation: high quality content takes time and budget; cheap monthly packages that promise X articles with no strategy are almost always wasted.

Link acquisition approach

Ask for method, not metrics. Prefer agencies that describe outreach workflows, journalist or PR relationships, and examples of editorial links in Ahrefs exports. Reject sellers who rely on private blog networks or vague bulk link claims – they create short term lifts and long term risk.

Analytics, tracking and conversion focus

Measure outcomes. The right partner wires GA4, Search Console and event tracking, and ties organic traffic to revenue or leads. Agencies that default to impression reports without conversion mapping are optimising the wrong thing.

Cross functional execution – design, dev and ads

Reality check: SEO that ignores UI and paid channels moves slower. Expect collaboration plans showing how the agency will work with your web devs, social team or PPC vendor and provide prioritized UX recommendations tied to traffic segments.

Concrete example: An ecommerce brand in Petaling Jaya engaged an agency that combined a technical speed audit, rewritten product descriptions with schema, and localised category pages. Within two months the site regained crawlability after a JS rendering issue was fixed and purchases from organic long-tail queries began to appear in GA4 conversion reports.

Verification artifacts to request:

  • A timestamped Google Search Console screenshot of a keyword lift
  • A CSV export of referring domains from Ahrefs or Semrush for a claimed campaign
  • A Screaming Frog or Sitebulb crawl export showing the top 10 technical fixes and their status

Key takeaway – weight proof over promises. Shortlist agencies that give you audit exports, a mini action plan for 30 days, and a clear developer handoff. That separates noise from suppliers who can actually execute.
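
Once you have a crawl export of claimed fixes, a few lines of scripting can tally deployment status before a vendor call. The sketch below is illustrative only; the column names and status values are assumptions, not Screaming Frog's or Sitebulb's actual export format.

```python
import csv
import io

# Illustrative fix-tracker export; columns and statuses are assumptions,
# not an actual crawler export format.
EXPORT = """url,issue,status
/menu,missing title tag,fixed
/booking,broken internal link,fixed
/blog/old-post,redirect chain,open
/contact,missing canonical,fixed
"""

def fix_summary(export_csv):
    """Return (deployed, total) for the claimed fix list."""
    rows = list(csv.DictReader(io.StringIO(export_csv)))
    deployed = sum(1 for row in rows if row["status"] == "fixed")
    return deployed, len(rows)

deployed, total = fix_summary(EXPORT)
print(f"{deployed}/{total} claimed fixes verified as deployed")  # 3/4 here
```

The point is not the script itself but the habit: ask for the export, count what was actually deployed, and query anything still marked open.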

3. Verify claims with reproducible evidence

Start with access, not screenshots. Agencies can prepare polished before-and-after images; what separates a vendor from a partner is whether they will give you time-limited access to the raw data that produced those images. Live access or timestamped exports remove ambiguity and expose whether results were organic, paid, or fabricated.

Concrete checks that actually work

Request reproducible artifacts. Ask for date-stamped CSVs or API exports from Google Search Console, GA4, and Ahrefs covering the period the agency claims improvement. A screenshot of a graph is not evidence. If an agency objects, treat that as a material red flag unless there is a documented NDA with the client that explains the restriction.

  1. Get temporary, read-only access. Seven to 14 days of view access to Google Search Console and GA4 lets you confirm keyword gains, clicks to pages, and conversion events.
  2. Ask for raw crawl exports. Provide a page list and request Screaming Frog or Sitebulb crawl exports showing before and after status codes, indexability flags, and canonical tags.
  3. Validate link claims. Request a referring domains export from Ahrefs or Majestic and check growth velocity, anchor text diversity, and referring IP ranges to spot bulk network links.
  4. Cross check dates. Compare claimed improvements to known Google algorithm updates at Google Search Central and to any paid campaigns you run that could explain traffic spikes.
  5. Demand an implementation trail. Ask for ticket IDs, pull request references, or a changelog for technical fixes so you can confirm work was deployed rather than only proposed.
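
Steps 2 and 3 above can be partly mechanised. The sketch below assumes an Ahrefs-style CSV with first-seen date, referring domain, and anchor-text columns (the column names and sample rows are illustrative, not Ahrefs' real schema); it computes the anchor-text diversity and monthly link velocity that step 3 calls out.

```python
import csv
import io
from collections import Counter

# Illustrative referring-domains export; column names and rows are assumed.
EXPORT = """first_seen,domain,anchor
2024-01-05,example-news.my,best dental clinic kl
2024-02-11,foodblog.example.my,klinik pergigian
2024-02-20,directory.example.my,Acme Dental
2024-03-02,press.example.my,best dental clinic kl
2024-03-15,localmag.example.my,acmedental.example
"""

def anchor_diversity(rows):
    """Distinct anchors / total links; values near 0 suggest bulk or network links."""
    anchors = [row["anchor"].lower() for row in rows]
    return len(set(anchors)) / len(anchors)

def monthly_velocity(rows):
    """New referring domains per month; sudden spikes deserve scrutiny."""
    return dict(sorted(Counter(row["first_seen"][:7] for row in rows).items()))

rows = list(csv.DictReader(io.StringIO(EXPORT)))
print(anchor_diversity(rows))   # 0.8: four distinct anchors across five links
print(monthly_velocity(rows))   # {'2024-01': 1, '2024-02': 2, '2024-03': 2}
```

Gradual velocity with varied anchors, as in this sample, is the healthy pattern; a single month with dozens of new domains all carrying the same anchor is the red flag described below.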

Practical tradeoff: live access is the best proof but it requires trust and basic permissions from your existing vendors. If a previous agency or client will not grant access, require recorded walkthroughs that show API query timestamps and context, and insist on a signed statement that the exports match the original source accounts.

Common scam signals to watch for. Rapid, large spikes in referring domains with identical anchor text, GSC screenshots with cropped date ranges, or agencies that only show aggregate metrics without page level detail. These patterns often precede penalties or ranking volatility and are evidence of short term manipulation rather than sustainable SEO.

Concrete example: A Kuala Lumpur legal practice shortlisted two agencies. The selected agency provided 10-day read-only Google Search Console access and an Ahrefs CSV export showing gradual backlink growth from 18 to 42 domains over six months with varied anchor text. The other vendor submitted three static PDFs. The live access revealed that the PDFs had combined paid and organic traffic, which changed the expected ROI and eliminated that vendor from contention.

Action to take now: Require either 7-day read-only access to analytics and Search Console or a timestamped API export for every case study the agency cites. Do not accept screenshots as sole proof.

Judgment: agencies that balk at sharing raw data often have legitimate confidentiality concerns, but that is different from refusing all verifiable proof. Good partners will offer one of three options: temporary access, a recorded API export, or a reference who can confirm specific outcomes. If none of those are possible, move on.

4. Create a structured RFP and compare pricing models

Start with outcomes in the RFP. A short, disciplined RFP forces vendors to state what they will deliver, when, and at what cost. If you leave deliverables vague you will get proposals full of activity lists that are impossible to compare and easy to upcharge.

Core items your RFP must demand

  • Business outcomes and KPIs: specific targets like monthly qualified leads, GBP actions or ecommerce conversions rather than generic traffic increases
  • Baseline assets and access: list the accounts you will grant access to and the format you expect deliverables in, for example timestamped CSV exports or deployable CMS patches
  • Pilot scope and acceptance criteria: a 5 to 15 hour paid audit or a single priority landing page build with clear acceptance tests and timeline
  • Itemised SOW and billing cadence: estimated hours, tasks, milestones, and what is included in the retainer vs billed as additional work
  • Ethics and ownership clauses: a clause banning manipulative link networks, plus content and code ownership reserved to your business

Tradeoff to understand. Detailed RFPs slow down vendor responses and will reduce the number of bids, but they eliminate guesswork and surface hidden fees early. If speed matters, require a short pilot as your decision point rather than a loose monthly scope.

Pricing models compared. Monthly retainers are reliable when you need steady, multi month work and close collaboration with developers or designers. Fixed price projects fit one time audits or migrations where scope is clear. Hourly consulting works for ad hoc strategy, but it creates incentive for time rather than outcomes. Performance based fees can sound attractive, but in practice they invite disputes over attribution and encourage risky tactics unless measurement and guardrails are ironclad.

Model | Best fit | Key downside
Monthly retainer | Ongoing optimisation and cross-functional work with dev or social teams | Requires strict monthly deliverables to avoid scope creep
Fixed-price project | Discrete tasks like migrations, audits or single-page builds | Change requests escalate costs quickly
Hourly / advisory | Short-term strategy or troubleshooting | Hard to forecast total cost
Performance-based | When outcomes are easily measurable and low risk to manipulate | Prone to attribution disputes and incentivises short-term link tactics

Concrete example: A Kuala Lumpur retailer issued an RFP that required a paid 10 hour technical audit and a fixed price delivery for three GBP and landing page fixes. One vendor quoted a low monthly fee but excluded developer hours; another gave a higher retainer that included two developer days per month. The retailer chose the latter because it removed the surprise engineering fees that would have doubled the true cost.

Actionable step: Insist on an itemised SOW and a short paid pilot. Require proposals to show total cost of ownership including expected content production, developer time, and any third party tool subscriptions such as Ahrefs or local citation services.
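
Total cost of ownership is easy to compute once proposals are itemised. The sketch below uses hypothetical Ringgit figures to replay the retailer example: a low monthly fee that excludes developer days can cost more over 12 months than a higher retainer that bundles them.

```python
# All Ringgit figures below are hypothetical, for illustration only.
def twelve_month_tco(monthly_fee, dev_days_needed, dev_days_included,
                     dev_day_rate, tools_monthly, months=12):
    """Total cost of ownership: fee plus uncovered developer days and tools."""
    extra_dev = max(0, dev_days_needed - dev_days_included) * dev_day_rate
    return (monthly_fee + extra_dev + tools_monthly) * months

# Vendor 1: low monthly fee, but developer time and tools billed separately.
low_fee = twelve_month_tco(monthly_fee=3000, dev_days_needed=2,
                           dev_days_included=0, dev_day_rate=1200,
                           tools_monthly=400)
# Vendor 2: higher retainer that includes two developer days and tooling.
retainer = twelve_month_tco(monthly_fee=5500, dev_days_needed=2,
                            dev_days_included=2, dev_day_rate=1200,
                            tools_monthly=0)
print(low_fee, retainer)  # 69600 66000: the "cheap" quote costs more
```

Run the same calculation on every shortlisted proposal so bids are compared on true annual cost, not headline retainer.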

If you need a starting RFP template tailored for Malaysian SMEs see our SEO services page at ArtBreeze SEO services and review operational guidance on verifiable metrics at Google Search Central.

Final consideration: pick the pricing model that aligns incentives with your 90 day plan and force measurable milestones into the contract before you pay the first month.

5. Interview the team and run a short trial audit

Hire the team, not the brand story. Live interviews and a compact, paid trial audit reveal more about how an agency actually works than any case study. A focused 5–10 hour test shows their diagnostic rigour, clarity of recommendations, and whether they can translate SEO into developer-ready tasks and business outcomes.

Who to speak with and what to judge

Talk to three people at minimum. Your account manager for communication rhythm and prioritisation, the technical lead for diagnostics and deployment approach, and the content strategist to vet topical thinking and briefs. If link building is important, add the outreach/PR lead to probe real relationships and past placements.

  • Account manager: responsiveness, cadence, escalation path, tooling (Trello, Asana or Basecamp).
  • Technical lead: ability to explain a fix without jargon, references to Screaming Frog or render tests, and how they measure impact.
  • Content strategist: sample brief structure, keyword grouping logic, and approach to localising copy for Malaysian cities.
  • Outreach/PR lead (optional): examples of editorial placements and the steps they take to vet publishers.

Run a paid 5–10 hour trial audit and know what to expect

Set a narrow scope and deliverable list. Ask for a short audit on one priority page or a single site section with: a short diagnosis, three prioritized fixes with estimated effort and impact, one deployable CMS patch or code note, and two content briefs. Make the output actionable — not an academic PDF.

  1. Diagnostic depth: are explanations tied to measurable signals (GSC, GA4, crawl logs) or just opinion?
  2. Actionability: do recommendations include exact steps, owners, and estimated hours?
  3. Local sensibility: do suggestions account for Google Business Profile, Malaysian directories, or multilingual needs?
  4. Communication and process: was the work handed over via a ticket, a pull request, or a clear changelog?

Concrete example: A boutique guesthouse in George Town commissioned an eight hour trial audit on its booking funnel and Google Business Profile. The agency identified missing booking schema, a mobile button overlap causing failed taps, and a GBP category mismatch — then supplied a prioritized patch list and two content micro-briefs. The guesthouse implemented the top fixes and saw a visible lift in booking form submissions and GBP actions within six weeks.

Practical judgment and a tradeoff to accept. Short trials won't prove long term link acquisition or sustained content velocity, but they do expose methodology and honesty. Beware agencies that use the trial as free labour: insist on a paid scope and clear ownership of any work product. If an agency refuses a small paid pilot without a valid confidentiality reason, that is a legitimate red flag.

Action now: Commission a paid 5–10 hour trial focused on one priority page. Require a prioritized action list, deployable instructions, and a 30 minute walkthrough. Use this test to score diagnostic quality and fit before signing a retainer. For a starter brief template see ArtBreeze SEO services and for technical validation refer to Google Search Central.

6. Use a scoring matrix to make the final decision

A scoring matrix forces clarity. When proposals look different and sales pitches sound persuasive, a weighted matrix reduces the choice to repeatable math and documented judgement — not gut feel.

Build the matrix around your business impact

Pick criteria that map to outcomes, not tasks. Translate your top business KPI into 6 to 8 criteria — for example technical execution, local SEO strength, content strategy, evidence and reporting, developer access, and price. Assign each criterion a weight that reflects how much it moves the needle for your business (weights must sum to 100).

Use a consistent scoring scale, such as 1 to 10, and write short rubrics for the key breakpoints. Example rubric for technical execution: 9-10 = full crawl exports + deployed fixes and changelog; 6-8 = solid audit with clear developer tasks; 1-5 = vague recommendations without proof of deployment. Anchoring scores like this prevents vendors from being compared on feel alone.

Criterion | Weight (%) | Agency A (score) | Agency B (score) | Agency C (score)
Technical execution | 25 | 8 | 6 | 9
Local SEO / GBP expertise | 25 | 7 | 9 | 6
Content strategy | 20 | 8 | 7 | 5
Evidence & references | 15 | 9 | 5 | 6
Price and TCO | 10 | 6 | 8 | 7
Cultural fit / comms | 5 | 8 | 6 | 8

Practical tradeoff: a matrix privileges what you measure. If you overweight price to save cost now, you may exclude vendors that include developer time and therefore deliver faster wins. Conversely, overweighting evidence can favour conservative, slower vendors who minimise risk but delay aggressive growth.

Concrete example: A skincare clinic in Shah Alam used a weighted matrix prioritising local SEO (30 percent) and developer-inclusive technical work (30 percent). One higher cost agency won because their score reflected included developer days and GBP remediation; the clinic accepted the higher retainer knowing it removed monthly surprise engineering bills and produced measurable booking increases in the first 90 days.

Run a sensitivity check. After scoring, change the most important weight by +/-10 to see if the winner flips. If the choice swings wildly with small weight changes, your criteria are too fragile or the shortlist is too homogenous — dig deeper into qualitative differences and reference checks.
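
Both the weighted total and the sensitivity check can be scripted in a few lines. The sketch below reuses the illustrative weights and scores from the table in this section; shifting 10 points from one criterion to another shows whether the winner is stable.

```python
# Weighted scoring matrix from the illustrative table above; weights sum to 100.
WEIGHTS = {
    "technical": 25, "local_seo": 25, "content": 20,
    "evidence": 15, "price": 10, "fit": 5,
}
SCORES = {  # 1-10 scale per criterion
    "Agency A": {"technical": 8, "local_seo": 7, "content": 8, "evidence": 9, "price": 6, "fit": 8},
    "Agency B": {"technical": 6, "local_seo": 9, "content": 7, "evidence": 5, "price": 8, "fit": 6},
    "Agency C": {"technical": 9, "local_seo": 6, "content": 5, "evidence": 6, "price": 7, "fit": 8},
}

def weighted_total(scores, weights):
    """Score out of 10: sum of criterion score x weight, divided by 100."""
    return sum(scores[c] * w for c, w in weights.items()) / 100

def winner(weights):
    return max(SCORES, key=lambda agency: weighted_total(SCORES[agency], weights))

print({a: weighted_total(s, WEIGHTS) for a, s in SCORES.items()})
print(winner(WEIGHTS))  # Agency A

# Sensitivity check: shift 10 points from technical execution to price.
shifted = dict(WEIGHTS, technical=WEIGHTS["technical"] - 10, price=WEIGHTS["price"] + 10)
print(winner(shifted))  # still Agency A here, so the result is robust to this shift
```

If a single 10-point shift flips the winner, treat that as the fragility signal described above and go back to reference checks rather than trusting the spreadsheet.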

Use the matrix to narrow to two finalists. Then combine the numeric result with a short qualitative memo on trust, evidence access, and how they handle developer handoffs before you sign.

Minimum viable matrix: 6 criteria, 1–10 scoring scale, weights sum to 100. Require a short rubric for each criterion and capture one line of evidence (link or screenshot) that justifies each score.

Finally, remember the matrix is a decision aid, not a substitute for reference calls or a trial audit. If scores and references disagree, prioritise lived evidence — timestamped analytics access, deployed fixes, and client conversations — over a marginally higher spreadsheet score. When in doubt, fund a small pilot to validate assumptions before committing to a long retainer.

7. Onboarding, first 90 day plan, and reporting cadence

Start with an access matrix, not assumptions. The single biggest cause of slow starts is unclear permissions: analytics you cannot view, CMS accounts you cannot publish to, or developer windows that never materialise. Insist on a signed access matrix in week one that lists account roles, who will approve deploys, and the agency escalation path.

First 48 hours – permissions and triage

Immediate actions: get time limited read access to GA4, GSC, and your Google Business Profile; provide staging and production CMS credentials; and give the agency a single priority page to audit. Tradeoff: broad access speeds diagnosis but requires clear audit logs and a temporary account for the agency to reduce security exposure.

Days 3-30 – baseline and quick wins

What to expect: in the first month the agency should deliver a short baseline report that ties traffic to conversions, a 10-item urgent fix list with deployable instructions, and an activated dashboard for daily monitoring. Prioritise fixes that unblock conversions – tracking gaps, mobile tap issues, booking form failures – because those show value faster than long keyword plays.

Days 31-90 – build, test, and iterate

Implementation rhythm: move from triage to execution: scheduled deploys for technical items, a 60 day content calendar with two publishable pieces, and the first round of outreach for high quality links. Run one CRO experiment on your highest traffic landing page and treat the result as learning rather than a single win. Limitation: backlinks and topical authority take months; expect meaningful organic rank movement after 90 days but not complete category dominance.

Report section | Primary consumer | Cadence
Executive KPI snapshot (organic sessions, conversions, GBP actions) | CEO / Founder | Monthly
Tactical tracker (tickets, deploys, urgent fixes) | Product / Dev | Weekly
Experiment log and content pipeline status | Marketing / Content | Biweekly

Reporting tradeoff: more reports create noise. Ask for one single source of truth – a Looker Studio dashboard or similar – plus short written notes that explain causes for any KPI swings. Weekly updates should be 10 lines max; monthly reports should explain variance and next month priorities.

Concrete example: An independent optometry clinic in Subang Jaya gave an agency staged access and one priority: appointment form conversions. In 30 days the agency fixed a broken mobile CTA, implemented appointment schema, and configured GA4 conversion events. By day 75 the clinic saw a measurable lift in submitted appointment forms and a 35 percent increase in GBP direction requests tied to the new tracking.

Onboarding SLA to insist on: weekly deployment windows, 48 hour response time for critical tickets, acceptance criteria for pilot deliverables, and a payment hold of 10 percent until the 30 day pilot acceptance is signed.

Force the 90 day plan into the SOW: specific deliverables, who owns each task, and one measurable KPI tied to the pilot payment. If the agency will not commit, it is a signal about delivery discipline.
