
Methodology

How TaskPacer builds its advice

Last reviewed 2026-04-20

You set the pace. AI keeps up. You own the result.

AI drafts. You decide. Final responsibility is yours.

TaskPacer is built for professionals who want to move faster without handing judgment to a tool. We use AI as a drafting, comparison, summarization, and workflow assistant. The human still chooses the goal, checks the facts, decides what good looks like, and owns the final output.

That is why our cards focus on concrete tasks instead of broad claims about jobs. A useful recommendation should tell you what to try, what inputs you need, what output to expect, what could go wrong, and when the source was last checked.

The point is not to make AI feel automatic. The point is to make your next responsible step clearer.

How the 20 launch professions were chosen

We chose the first 20 professions around one launch persona: non-technical professionals who use a computer every day and can share a useful result with a manager or team. The list favors roles with repeatable digital work, visible time pressure, and a realistic path from individual use to small-business buying intent.

The launch list is split into Tier 1 and Tier 2. Tier 1 contains the viral core: bookkeepers, customer service representatives, administrative assistants, recruiters, digital marketing specialists, project managers, B2B sales representatives, accountants, paralegals, and real estate agents. Tier 2 expands to adjacent roles once the core content system is stable.

We deliberately removed roles that did not match this launch focus, including engineering roles, lawyers as a primary launch role, compliance officers, EU legal experts, auditors, and logistics managers. Some were too technical, too enterprise-regulated, too field-heavy, or too far from the SMB viral path for the MVP.

How each profession's 10 tasks are derived

We start from the O*NET taxonomy for the profession's SOC code, especially Detailed Work Activities. The raw task list is treated as a seed, not as final product copy. We filter out tasks where current AI use is not realistically useful, such as physical work, high-stakes interpersonal judgment, or work that legally requires direct human decision-making.

Each remaining task gets an AI automation potential score. Tasks with a score of 0.0 are removed before ranking. The top 10 remaining tasks become the profession-specific task list. Tasks 1 through 5 show full solution cards; tasks 6 through 10 show a locked preview so users can see the scope before unlocking the full role plan.
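As a sketch, that selection step could look like the TypeScript below. The `ScoredTask` shape and field names are illustrative, not the production schema; the impact score itself is defined in the next section.

```ts
// Illustrative shape; the field names are assumptions, not the production schema.
interface ScoredTask {
  title: string;
  aiAutomationPotential: number; // 0.0, 0.3, 0.7, or 1.0
  impactScore: number;           // see "Impact score formula" below
}

// Drop 0.0-potential tasks, rank the rest, keep the top 10, and split
// positions 1-5 (full solution cards) from 6-10 (locked previews).
function selectTopTasks(tasks: ScoredTask[]) {
  const ranked = [...tasks]
    .filter((t) => t.aiAutomationPotential > 0)
    .sort((a, b) => b.impactScore - a.impactScore)
    .slice(0, 10);
  return { fullCards: ranked.slice(0, 5), lockedPreviews: ranked.slice(5) };
}
```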

When the same task appears in at least three launch professions, we move it into the canonical task bank. For example, a task like preparing reports can be written once as a shared workflow, then referenced by several profession pages with profession-specific wording and impact scores.
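A minimal sketch of that promotion rule, assuming tasks have already been reduced to normalized keys. The real "in substance" match is an editorial judgment, not string equality:

```ts
// Count how many launch professions contain each task key and return the
// keys that clear the canonical threshold.
function canonicalCandidates(
  professionTasks: Map<string, string[]>, // profession slug -> task keys
  threshold = 3,
): string[] {
  const counts = new Map<string, number>();
  for (const tasks of professionTasks.values()) {
    for (const key of new Set(tasks)) {
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  return [...counts].filter(([, n]) => n >= threshold).map(([key]) => key);
}
```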

Impact score formula

The impact score ranks tasks by practical leverage. It combines how much time a task consumes with how much of that task AI can realistically assist or automate today. It is not a claim that a whole job can be automated.

`time_share_pct` estimates the share of work time spent on the task, using O*NET task data and occupational activity patterns. `ai_automation_potential` uses four bands: 1.0 for mostly end-to-end output generation, 0.7 where AI drafts and humans review, 0.3 where AI only assists, and 0.0 where the task is not included.

The score is capped at 100 so high-frequency, high-potential tasks do not distort the page.

raw_score = time_share_pct * ai_automation_potential * 10
impact_score = min(100, round(raw_score))

Digital Marketing Specialist example:
Create digital content strategies: 12 * 1.0 * 10 = 120 -> 100
Develop and execute online marketing strategies: 10 * 0.7 * 10 = 70 -> 70
Collect and analyze web metrics: 9 * 0.7 * 10 = 63 -> 63
Coordinate with web developers: 6 * 0.3 * 10 = 18 -> 18
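In code, the formula is a direct transcription of the two lines above; `impactScore` is an illustrative name:

```ts
// impact_score = min(100, round(time_share_pct * ai_automation_potential * 10))
function impactScore(timeSharePct: number, aiAutomationPotential: number): number {
  return Math.min(100, Math.round(timeSharePct * aiAutomationPotential * 10));
}

impactScore(12, 1.0); // 120, capped to 100
impactScore(9, 0.7);  // 63
impactScore(6, 0.3);  // 18
```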

Radar axis methodology

Every profession uses the same five radar axes: Creative & Generative, Analysis & Decisions, Coordination, Communication, and Interpersonal. The labels are shared so different profession pages can be compared, but the values are not shared. Each profession gets its own current and potential signature.

The current value estimates observed AI use in that task family today. The potential value estimates how much of the task family could be supported over a two-year horizon if the user adopted reliable workflows and tools.

The radar is a directional model, not a wage forecast or replacement forecast. It helps users see where AI is already common, where the upside is larger, and where human judgment remains central.

current[axis] = sum over tasks in axis (time_share * current_AI_adoption_2026)
potential[axis] = sum over tasks in axis (time_share * ai_automation_potential)
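A sketch of how those sums could be computed, assuming each task is tagged with one of the five axes and carries per-task estimates; the field names are assumptions:

```ts
type Axis =
  | "Creative & Generative"
  | "Analysis & Decisions"
  | "Coordination"
  | "Communication"
  | "Interpersonal";

interface RadarTask {
  axis: Axis;
  timeShare: number;             // fraction of work time, e.g. 0.12
  currentAiAdoption2026: number; // editorial estimate, 0..1
  aiAutomationPotential: number; // 0.0, 0.3, 0.7, or 1.0
}

// Accumulate each task's weighted contribution into its axis.
function radarSignature(tasks: RadarTask[]) {
  const current = {} as Record<Axis, number>;
  const potential = {} as Record<Axis, number>;
  for (const t of tasks) {
    current[t.axis] = (current[t.axis] ?? 0) + t.timeShare * t.currentAiAdoption2026;
    potential[t.axis] = (potential[t.axis] ?? 0) + t.timeShare * t.aiAutomationPotential;
  }
  return { current, potential };
}
```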

Canonical task bank rule

Some tasks repeat across many professions. Writing those cards separately would create drift: one page might recommend a better workflow than another for the same underlying task. The canonical task bank solves that by storing shared task solutions once.

A task becomes canonical when it appears, in substance, in at least three of the 20 launch professions. Profession pages can still override the title and impact score, because the same task may matter more for one role than another. But the solution cards come from the shared record.

In a profession JSON file, `solutions: null` with a `canonical_id` means the renderer should hydrate the task from `src/data/shared-tasks.json`.
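A hydration sketch, under the assumption that `shared-tasks.json` holds an array of canonical records; the interfaces are minimal stand-ins for the real shapes:

```ts
import sharedTasks from "./src/data/shared-tasks.json";

interface ProfessionTask {
  title: string;          // profession-specific wording
  impact_score: number;   // profession-specific override
  canonical_id?: string;
  solutions: unknown[] | null;
}

interface SharedTask {
  canonical_id: string;
  solutions: unknown[];
}

// solutions: null plus a canonical_id means: pull the shared solution cards,
// while the profession file keeps its own title and impact score.
function hydrate(task: ProfessionTask): ProfessionTask {
  if (task.solutions !== null || !task.canonical_id) return task;
  const shared = (sharedTasks as SharedTask[]).find(
    (s) => s.canonical_id === task.canonical_id,
  );
  return { ...task, solutions: shared?.solutions ?? [] };
}
```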

Source evaluation framework

Every solution card needs sources that match the actual recommendation. We prefer first-party sources: official tool docs, release notes, prompt packs, help centers, official reports, or named operator practitioners with a visible workflow.

A source must pass hard gates before it can support a card: clear source identity, visible date, specific task-to-tool mapping, at least one extractable asset, non-generic operational detail, realistic self-serve access, and freshness for the AI workflow described. After that, it is scored on source authority, hands-on evidence, workflow specificity, self-serve accessibility, freshness, extractability, and evidence strength.

Verdicts are used internally: ACCEPT_PRIMARY for scores 15 and above, ACCEPT_SECONDARY for 12 to 14, WATCHLIST for 9 to 11, and REJECT below 9. A source also has to pass the TaskPacer fit test by supporting at least three useful elements: task, tool, workflow, prompt, output, metric, or caveat.
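The verdict bands translate directly into code; the TaskPacer fit test is an editorial check and is not modeled here:

```ts
type Verdict = "ACCEPT_PRIMARY" | "ACCEPT_SECONDARY" | "WATCHLIST" | "REJECT";

// Map a total rubric score to an internal verdict using the published bands.
function verdictFor(score: number): Verdict {
  if (score >= 15) return "ACCEPT_PRIMARY";
  if (score >= 12) return "ACCEPT_SECONDARY";
  if (score >= 9) return "WATCHLIST";
  return "REJECT";
}
```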

Agency positioning

You set the pace. AI keeps up. You own the result.

That sentence is the product's operating principle. The user chooses the work, the goal, and the standard of acceptable quality. AI can produce drafts, options, summaries, transformations, and checklists. It should not silently decide what is true, compliant, tasteful, or ready to send.

The short form appears on cards and footers: AI drafts. You decide. Final responsibility is yours. We repeat it because speed is only useful when the user stays in control.

Responsibility disclaimer

TaskPacer recommendations are educational workflow suggestions. They are not legal, medical, financial, tax, HR, or compliance advice. AI output can be incomplete, outdated, biased, or wrong. Verify important facts, review generated content before sending it, and follow your employer's policies.

For regulated or sensitive professions, the bar is higher. HR, finance, legal, tax, medical-adjacent, and compliance-related users should check jurisdiction rules, professional standards, client confidentiality obligations, and internal governance before using any AI workflow in production.

The regulated profession notice appears in the result body, not only in the footer, because users in these roles need to see the limitation before acting on the recommendations.

Last verified policy

AI tooling changes quickly, so every solution card carries a last verified date. A refresh means we re-check the source URL, confirm the tool or workflow still exists, review any changed documentation, and rerun the source framework if the claim depends on updated material.

Tool recommendations and best-practice cards are treated as the fastest-moving content. Profession task lists move more slowly because task structure changes less often than tool interfaces. Hourly rate defaults are reviewed on a quarterly cadence.

The date is not a guarantee that nothing changed after review. It is a transparency marker so users can decide how much extra checking they need before using a recommendation.

Refresh policy

| Content | Cadence | Refresh actions |
| --- | --- | --- |
| Tool recommendations and best practices | Every 7 days | Re-check source URLs, tool documentation, pricing or access details, workflow steps, and the source-framework score when a source has changed. |
| Task lists per profession | Every 90 days | Re-pull O*NET task data, review dropped tasks, and recompute impact scores if task weights or AI exposure assumptions change. |
| Hourly rate defaults | Quarterly | Review BLS and wage-data sources and update default ROI assumptions where the underlying data has moved. |
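As an illustration, a cadence check can be as simple as the sketch below; the content-kind keys and the `lastVerified` parameter are hypothetical, and "quarterly" is approximated as 91 days:

```ts
const CADENCE_DAYS = {
  tool_recommendation: 7,
  profession_task_list: 90,
  hourly_rate_default: 91, // "quarterly", approximated in days
} as const;

type ContentKind = keyof typeof CADENCE_DAYS;

// A card is due for refresh when its last-verified date is older than
// the cadence for its content kind.
function isDueForRefresh(kind: ContentKind, lastVerified: Date, now = new Date()): boolean {
  const ageDays = (now.getTime() - lastVerified.getTime()) / 86_400_000;
  return ageDays > CADENCE_DAYS[kind];
}
```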

Responsibility note

TaskPacer provides educational workflow suggestions. AI output should be reviewed before it is sent, published, relied on, or used with customers, employees, clients, or regulators.

For HR, finance, legal, medical-adjacent, tax, compliance, and other regulated work, TaskPacer is not professional advice. Consult a qualified professional, your employer's policies, and applicable jurisdiction rules before acting on any recommendation.