A real-world hiring incident at a U.S. newsroom illustrates the pattern: a single engineering posting attracted 400+ applications in roughly half a day, followed by indicators of templated and potentially fraudulent submissions, and even an impersonation scam targeting applicants.
The resulting market structure is a closed loop:
Candidates use AI to generate optimized narratives.
Employers use AI to reject most narratives.
Candidates respond by further optimizing for AI filters.
Employers harden screens further.
The loop is “rational” at each step, but collectively destructive: it compresses differentiation, raises false positives and false negatives, and shifts selection toward keyword conformity.
Recruiting used to be constrained by effort. A candidate could embellish, but producing dozens of tailored, persuasive applications took time. Generative AI removed that friction. When everyone can generate polished CVs and bespoke cover letters instantly, the surface quality of applications stops being informative.
In the referenced newsroom case, warning signs were operational rather than philosophical:
Repeated contact details across “different” candidates
Similar layouts and writing structures
Broken or empty professional profiles
Near-identical motivation statements
Blatant false claims of work performed
The employer eventually pulled the listing and shifted to internal sourcing. A separate scam then emerged: an impersonator used a lookalike email domain to send fake offers and collect sensitive financial information.
Net effect: the resume becomes cheaper to manufacture than to verify, and fraud scales faster than due diligence.
The premise is not that extraordinary people cannot succeed. The premise is that automated early-stage filters are structurally hostile to non-standard signals.
A useful illustration is Steve Jobs’ pre-Apple job application: handwritten, missing key contact details, and containing a naming inconsistency. In a modern workflow, missing contact data, nonstandard formatting, and “inconsistencies” are precisely the features automated systems penalize.
In parallel, employers increasingly rely on automated decisioning (or tools that function like it) because application volume is unmanageable manually—especially for remote-eligible roles where candidate pools are global.
Core mechanism: systems designed to reduce employer risk reduce variance—thereby reducing the probability of admitting outliers, including positive outliers.
Candidates generate multiple role-specific CV variants and cover letters at scale, matching keywords and competency frameworks.
Employers deploy automated screening to control volume and detect fraud patterns. In doing so, they increase the number of hard filters (keyword presence, credential requirements, formatting, timeline consistency, portfolio links, identity checks), as sketched below.
Candidates learn the filters (or buy tools that do), then optimize outputs to pass them. This increases homogeneity further and pushes fraudsters to blend into the same “approved” patterns.
The average application becomes less trustworthy; employers rely more on machine screening and less on human judgment; unconventional profiles are increasingly discarded.
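To make the hard-filter stage concrete, here is a minimal sketch in Python; the field names, keyword list, and experience threshold are illustrative assumptions, not any vendor's actual rules. The structural point is that each check is a cheap boolean gate, and an application that fails any single gate never reaches a human.

```python
# Minimal sketch of rule-based "hard filter" triage.
# Field names, keywords, and thresholds are illustrative assumptions.

REQUIRED_KEYWORDS = {"python", "kubernetes", "ci/cd"}   # assumed role keywords
MIN_YEARS_EXPERIENCE = 5                                # assumed credential bar
MANDATORY_FIELDS = ("email", "phone", "portfolio_url")  # completeness check

def passes_hard_filters(app: dict) -> bool:
    text = (app.get("resume_text") or "").lower()

    # 1. Keyword presence: every required keyword must appear verbatim.
    if not all(kw in text for kw in REQUIRED_KEYWORDS):
        return False
    # 2. Credential requirement: experience reduced to a single number.
    if app.get("years_experience", 0) < MIN_YEARS_EXPERIENCE:
        return False
    # 3. Formatting / completeness: mandatory fields must be non-empty.
    if not all(app.get(field) for field in MANDATORY_FIELDS):
        return False
    return True

applications = [
    {"resume_text": "Python, Kubernetes, CI/CD pipelines at scale",
     "years_experience": 6, "email": "a@example.com",
     "phone": "555-0100", "portfolio_url": "https://example.com/a"},
    {"resume_text": "Self-taught; built and shipped a compiler solo",
     "years_experience": 3, "email": "b@example.com",
     "phone": "", "portfolio_url": ""},
]
shortlist = [a for a in applications if passes_hard_filters(a)]
# Only the first, keyword-conformant application survives triage.
```

In this toy batch, the second (hypothetical) applicant is strong but unconventional and is dropped purely on keyword and completeness grounds, which is exactly the variance suppression described above.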
The newsroom incident demonstrates early-stage symptoms: sudden volume spikes, templated similarity, and a downstream scam ecosystem that attaches itself to high-traffic job posts.
This is not only a hiring quality issue; it is also an operational risk issue.
Remote hiring channels have been exploited using deepfakes and stolen personal data, including attempts to access sensitive roles.
Some schemes involve fraudulent remote IT work arrangements, infrastructure manipulation (including “device relay” setups), and money laundering patterns.
Algorithmic screening can replicate historical bias if trained on biased data or proxies, creating legal and reputational exposure.
Hiring-related automated decision tools are increasingly treated as regulated risk surfaces—driving requirements for governance, transparency, and oversight.
Bottom line: the AI hiring loop is tightening at exactly the moment regulators are raising expectations for explainability and fairness.
No recruiter wants to miss a great candidate. But under extreme volume, the first mandate becomes throughput and risk reduction. If 1,000 applications arrive, the operational incentive is to automate triage and reduce time-to-shortlist.
That creates a selection function aligned to:
Credential legibility over capability
Keyword match over demonstrated problem-solving
Consistency signals over creative variance
Low perceived risk over high-upside ambiguity
This is also reinforced by vendors productizing automation across sourcing, screening, and workflow management to compress hiring cycle time.
Startups historically win by finding asymmetric talent—people who are early, weird, self-taught, non-credentialed, or simply misfit for large-company molds. When startups adopt large-company screening logic (or buy it off the shelf), they inadvertently sabotage their comparative advantage.
This is why the “Gates or Jobs” thought experiment resonates: not because of celebrity, but because both are archetypes of high-signal, low-compliance profiles. Jobs’ messy application is a proxy for the broader category: candidates who are strong but don’t package themselves in corporate HR dialect.
The fix is not “ban AI.” The fix is rebalancing signals: reduce reliance on narrative documents and increase reliance on authenticated, real-time demonstration.
Use a short, structured intake (identity + basics) → immediate work-sample gate → only then the resume. This makes AI polishing largely irrelevant because selection is driven by performance.
Deploy AI for anomaly detection (template similarity, repeated contact elements, portfolio link integrity, domain impersonation patterns), while keeping human ownership of advancement decisions; a minimal detection sketch follows these recommendations.
Create a protected pathway for unconventional candidates: referrals, open-source contributions, portfolio walkthroughs, and founder-reviewed submissions. The goal is to counteract variance suppression caused by automated filters.
Adopt staged verification proportional to role sensitivity—stronger checks for roles with system access, lighter checks early—without turning the process into a barrier only privileged candidates can clear.
If automated tools are used to screen or rank, implement bias audits, candidate notice, documentation, and appeal paths consistent with modern compliance expectations.
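To ground the anomaly-detection recommendation, here is a rough sketch using only the Python standard library. The field names, thresholds, and reference domain are assumptions, and the output is a set of flags for a human reviewer rather than an automatic rejection.

```python
# Sketch of batch-level anomaly flagging for applications.
# Field names, thresholds, and the reference domain are assumptions.
# Output: human-readable flags for review, not auto-reject decisions.
from collections import Counter
from difflib import SequenceMatcher
from itertools import combinations

EMPLOYER_DOMAIN = "examplenews.com"  # assumed legitimate domain, for lookalike checks
NEAR_IDENTICAL = 0.90                # assumed similarity cutoff for templated text
LOOKALIKE = 0.85                     # assumed similarity cutoff for domain spoofing

def flag_batch(apps: list[dict]) -> list[str]:
    flags = []

    # Repeated contact details across "different" candidates.
    for email, count in Counter(a["email"] for a in apps).items():
        if count > 1:
            flags.append(f"shared contact email on {count} applications: {email}")

    # Near-identical motivation statements (pairwise similarity; fine at small batch sizes).
    for a, b in combinations(apps, 2):
        ratio = SequenceMatcher(None, a["cover_letter"], b["cover_letter"]).ratio()
        if ratio >= NEAR_IDENTICAL:
            flags.append(f"templated cover letters: {a['email']} / {b['email']} ({ratio:.2f})")

    # Lookalike domains resembling the employer's own (the impersonation pattern).
    for a in apps:
        domain = a["email"].rsplit("@", 1)[-1]
        if domain != EMPLOYER_DOMAIN and \
           SequenceMatcher(None, domain, EMPLOYER_DOMAIN).ratio() >= LOOKALIKE:
            flags.append(f"lookalike domain: {domain} resembles {EMPLOYER_DOMAIN}")

    return flags
```

The design choice that matters is the return type: flags, not decisions, so advancement remains a human call.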
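On the bias-audit point, one conventional starting measure is the selection-rate comparison behind the "four-fifths rule": flag any group whose selection rate at a screening stage falls below 80% of the highest group's rate. A minimal sketch, with placeholder group labels and records:

```python
# Minimal adverse-impact check in the spirit of the "four-fifths rule".
# Group labels and records are placeholders; a real audit follows the
# categories, sample-size caveats, and documentation the regulation requires.
from collections import defaultdict

def impact_ratios(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group_label, was_advanced) pairs for one screening stage."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        advanced[group] += int(passed)
    rates = {g: advanced[g] / totals[g] for g in totals}
    benchmark = max(rates.values())  # rate of the most-selected group
    return {g: rate / benchmark for g, rate in rates.items()}

ratios = impact_ratios([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # conventional review trigger
```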
The hiring market is drifting toward a robot-to-robot interface, where candidates generate machine-optimized identities and employers deploy machine-optimized rejection. In that equilibrium, the most compliant narratives win—not necessarily the most capable humans.
The organizations that outperform will be the ones that treat AI as a fraud-and-workflow accelerator, not as a substitute for talent judgment—and that deliberately engineer an outlier-detection lane so the next exceptional builder is not filtered out for lacking the right formatting, the right keywords, or the right kind of résumé.