Inside the First Wave: How Supercomputer Access Is Really Decided

A call-for-proposals push is circulating for first-wave supercomputer access. We explain how projects are chosen—and what a credible bid looks like.

A call-for-proposals push is circulating for “first-wave” scientific applications, confirmed as official announcement activity via a published call from the Oak Ridge Leadership Computing Facility (OLCF) under its Center for Accelerated Application Readiness (CAAR) program for the upcoming Discovery system.

The obvious framing is supercomputer hype: a new machine, a new logo, and a scramble for prestige. The more useful question is procedural: what actually wins when “first-wave” slots are scarce and the platform is still being shaped?

The story turns on whether reviewers reward ambition alone—or ambition that is engineered to run, scale, and teach the facility something on day one.

Key Points

  • The OLCF CAAR call is designed to get AI, modeling-and-simulation, workflow, and data-heavy applications ready for Discovery, a system expected to be available in 2028; proposals are accepted from January 12 to March 16, 2026, with decisions announced in early May 2026.

  • Confirmed evaluation signals emphasize scientific advancement, the ability to transfer results to a broader community, team commitment, and the quality of the challenge problem, including its figure of merit and acceleration plan.

  • “First-wave” suitability is less about being first chronologically and more about being representative, scalable, and instructive—projects that stress the system and validate the workflow the program needs.

  • A credible compute plan now pairs evidence (benchmarks, scaling results, explicit performance goals) with an execution plan, much as other major allocation programs separate scientific impact from technical readiness.

  • Unknown unless officially stated: the weighting of CAAR evaluation criteria and how reviewers trade off “breakthrough potential” versus “engineering maturity” across domains.

  • Globally, different programs formalize the same basic idea: proposals move through administrative checks, technical assessment, and scientific/domain review, then ranking and final allocation decisions.

Background

“First-wave science” is shorthand for the projects that run early enough to influence not just papers but platform defaults: which libraries get hardened, which workflows become “standard,” and which science domains get treated as flagship use cases.

Confirmed

The published CAAR call describes Discovery as arriving in 2028 and positions CAAR as an early program to prepare applications—explicitly inviting even teams with limited GPU acceleration and/or limited scalability. It also lists concrete team responsibilities (including a challenge problem, a figure of merit measured on Frontier, and an acceleration plan targeting at least 5× improvement) and describes support resources such as staff assistance, training sessions, and compute allocations on Frontier and early hardware.

Disputed or unclear

The definition of “first user” is frequently ambiguous in public discourse. In many early-access programs, “first-wave” participation can mean structured readiness work (benchmarking, porting, workflow validation) long before full production access. For CAAR, Discovery is not described as available now; it is described as arriving in 2028, with early access offered to accepted projects as part of the program’s preparation pathway.

Unknown (unless officially stated)

For CAAR specifically, the call describes what will be evaluated but does not publish a numeric weighting or a formal scoring rubric in the visible call text. Timelines beyond the proposal window and decision announcement—such as how “early access” is staged as Discovery approaches—should be treated as unknown unless later clarified in official program materials.

Analysis

Who is eligible—and what reviewers actually prioritize

Eligibility is often the first misunderstanding. In some programs, it’s broadly open (academia, labs, industry) but bounded by governance rules; in others, it is explicitly regional (for example, EuroHPC regular access is tied to eligibility rules and governance in its call documentation).

For CAAR, the public call language emphasizes inviting teams across domains and experience levels, including those who have struggled with GPU acceleration and scalability, and it frames acceptance around scientific advancement, community transfer, and contributions to programming models/algorithms.

Across the ecosystem, reviewers tend to prioritize a consistent set of signals:

  • Impact and scientific/technical merit as the primary signal.

  • Feasibility/readiness as the filter: is this team prepared to use the machine effectively?

  • Team capability as the execution test: the personnel and time to deliver the project, not just to propose it.

That pattern is explicit in other flagship allocation regimes. For example, INCITE uses peer review for merit and impact alongside technical assessment for readiness, with “potential impact” described as the predominant determinant for awards.

What makes a proposal “first-wave” suitable

A first-wave proposal is not merely “important science.” It is science that helps a facility answer uncomfortable early questions:

Will real applications run at scale?
Do the data paths hold up under load?
Do the software stacks behave outside of curated demos?
Do programming models and algorithms generalize across domains?

CAAR makes that philosophy explicit by requiring:

  • A challenge problem intended for Discovery,

  • A measurable figure of merit (FOM) benchmarked on Frontier,

  • An acceleration plan targeting at least a 5× improvement, plus a meaningful team time commitment (see the arithmetic sketch below).
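To make those last two requirements concrete, here is a minimal arithmetic sketch, assuming a hypothetical figure of merit defined as simulated work completed per wall-clock hour; the FOM definition and every number below are illustrative assumptions, not values from the CAAR call.

```python
# Hypothetical figure-of-merit (FOM) arithmetic for an acceleration plan.
# The FOM definition and all numbers are illustrative assumptions,
# not values taken from the CAAR call.

def figure_of_merit(simulated_units: float, wall_clock_hours: float) -> float:
    """FOM defined here as simulated work completed per wall-clock hour."""
    return simulated_units / wall_clock_hours

# Baseline measured on the current system, with invented numbers.
baseline_fom = figure_of_merit(simulated_units=1.2e6, wall_clock_hours=24.0)

# The call asks for an acceleration plan targeting at least a 5x improvement.
target_fom = 5.0 * baseline_fom

print(f"Baseline FOM: {baseline_fom:,.0f} units/hour")
print(f"5x target FOM: {target_fom:,.0f} units/hour")
```

The value of the exercise is less the number than the discipline: a figure of merit forces a team to state up front what “faster” means for its science, and the 5× target turns the acceleration plan into something measurable.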

So “first-wave suitable” usually means:

  • Representative: a real workload others in the field will recognize.

  • Scalable: even if imperfect today, it has a credible path to extreme scale.

  • Instrumental: it yields reusable methods, not one-off heroics.

What compute plans look credible now

Credibility in 2026 is increasingly evidence-driven. The strongest compute plans typically read like an engineering brief with scientific intent:

They specify where the time goes (compute vs preprocessing vs I/O), what bottlenecks dominate (communication, memory bandwidth, I/O contention), and how performance will be measured. CAAR formalizes this with the FOM and acceleration plan expectation.

Elsewhere, the same principle shows up as a technical assessment requirement: provide scaling/performance data and demonstrate feasibility on the intended class of system.

A weak compute plan asks for a huge allocation and promises “optimization later.” A credible one does the reverse: it shows what is already understood, what remains unknown, and what the team will do in the first 30–90 days to de-risk the run.
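As an illustration of what that evidence can look like, here is a minimal sketch that computes speedup and parallel efficiency from strong-scaling runs; the node counts and timings are invented placeholders, not measurements from any real application.

```python
# Minimal strong-scaling summary of the kind a compute plan might include.
# Node counts and wall-clock times are invented placeholders.

runs = [
    (64, 512.0),    # (nodes, wall-clock seconds for a fixed problem size)
    (128, 270.0),
    (256, 150.0),
    (512, 95.0),
]

base_nodes, base_time = runs[0]
for nodes, seconds in runs:
    speedup = base_time / seconds
    ideal = nodes / base_nodes
    efficiency = speedup / ideal
    print(f"{nodes:4d} nodes: speedup {speedup:4.1f}x, efficiency {efficiency:5.1%}")
```

A table like this shows at a glance where scaling begins to fall off, which is exactly the kind of candor a credible plan signals.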

What credible data and workflow plans look like

First-wave proposals quietly win or lose on their data plans, because early platforms are judged by operational reality.

The most credible proposals:

  • Describe the end-to-end workflow (ingest → compute → postprocess → archive → reproducibility).

  • Quantify storage and I/O patterns and explain mitigation (staging, compression, and checkpoint strategies); a rough sizing sketch follows this list.

  • Address governance and access for datasets, especially if AI training data is involved.
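To show what quantifying storage and I/O might look like in practice, here is a minimal back-of-envelope sketch of a checkpoint budget; the sizes, intervals, and bandwidth figures are assumptions chosen only for illustration.

```python
# Back-of-envelope I/O budget for a checkpointing workflow.
# All sizes, intervals, and bandwidth figures are illustrative assumptions.

checkpoint_size_tb = 2.5        # size of one checkpoint, in terabytes
checkpoint_interval_hr = 1.0    # hours of compute between checkpoints
campaign_hours = 240.0          # total wall-clock hours in the campaign
filesystem_bw_tb_per_s = 0.01   # sustained write bandwidth (10 GB/s)

num_checkpoints = campaign_hours / checkpoint_interval_hr
total_checkpoint_tb = num_checkpoints * checkpoint_size_tb
write_time_per_checkpoint_s = checkpoint_size_tb / filesystem_bw_tb_per_s
io_fraction = (num_checkpoints * write_time_per_checkpoint_s) / (campaign_hours * 3600)

print(f"Checkpoints written:        {num_checkpoints:.0f}")
print(f"Total checkpoint volume:    {total_checkpoint_tb:.0f} TB")
print(f"Time per checkpoint:        {write_time_per_checkpoint_s:.0f} s")
print(f"Campaign time spent on I/O: {io_fraction:.1%}")
```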

CAAR’s framing makes data part of the point: the program is positioned as a pathway to day-one applications and, more broadly, to generating datasets used for AI workflows in the larger platform context.

What partnerships strengthen bids

Partnerships matter less as logos and more as demonstrated execution capacity.

The strongest bids typically show:

  • A tight pairing between a domain science lead and a computational performance lead.

  • Evidence of institutional support: sustained staff time, not only student cycles.

  • A plan to draw on facility support and vendor expertise where available.

CAAR includes partnerships in its support model—direct help from OLCF and resources from the ORNL HPE/AMD Center of Excellence are available to accepted teams—so proposals that can use this support well (with clear goals, identified challenges, and dedicated staff) have a better chance of success.

What Most Coverage Misses

The hinge is simple: first-wave selection is platform strategy disguised as scientific selection.

The mechanism is incentives. Early-access programs are not only awarding compute; they are buying down risk for the facility. Reviewers and program staff need projects that (1) produce credible science and (2) expose the real constraints—software maturity, I/O pain, workflow fragility—early enough to fix them.

The signposts to watch are operational, not rhetorical. Look for official materials that clarify (a) whether proposals are scored with an explicit rubric or weighting, (b) the expected balance between “breakthrough potential” and “readiness,” and (c) how early-access phases will be staged as the machine approaches production. In CAAR’s case, the published text describes evaluation categories and a decision window, but not an explicit weighting model.

What Happens Next

In the CAAR pathway, the published sequence is straightforward: proposals are submitted during the open window, submissions are evaluated against the stated criteria, and decisions are announced in early May 2026, with planning for a kickoff workshop in early June 2026.

In the broader supercomputing world, the selection pipeline tends to converge on a familiar multi-stage process: administrative checks, technical feasibility review, domain/scientific scoring and ranking, and then a final resource-allocation decision by a governing or allocation body, followed by award acceptance and onboarding.

The practical implication is that “what happens next” is not a single committee vote. It is a funnel. Each stage rewards a different kind of clarity: compliance first, feasibility next, impact last—and only then the final tradeoffs when resources run out.
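As a toy illustration of that funnel, the sketch below models the stages as sequential checks; the stage names and pass criteria are generic, not any program’s actual review workflow.

```python
# A toy model of the multi-stage selection funnel described above.
# Stage names and pass criteria are generic illustrations, not any
# program's actual review workflow.

stages = [
    ("administrative check", lambda p: p["complete_and_eligible"]),
    ("technical feasibility", lambda p: p["scaling_evidence"]),
    ("scientific review", lambda p: p["high_impact"]),
    ("allocation decision", lambda p: p["fits_available_resources"]),
]

def run_funnel(proposal):
    """Walk the proposal through each stage; stop at the first failure."""
    for name, passes in stages:
        if not passes(proposal):
            return f"stopped at: {name}"
    return "awarded"

example = {
    "complete_and_eligible": True,
    "scaling_evidence": True,
    "high_impact": True,
    "fits_available_resources": False,
}
print(run_funnel(example))  # -> stopped at: allocation decision
```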

Real-World Impact

A materials lab with a promising simulation can become “first-wave ready” by turning its best idea into a benchmarkable challenge problem, then using facility support to squeeze out the 80% of performance the science depends on.

A public-health modeling team can win first-wave interest if its workflow is reproducible and data-governed—because nothing embarrasses a new platform faster than a pipeline that cannot be rerun.

A fusion or climate group can become disproportionately influential if its code becomes the early “reference workload” others copy, because facility tuning and tooling then evolve around its needs.

A smaller research team can compete if it is honest about bottlenecks and commits real engineering time—because first-wave programs often value teachable, transferable improvements over glossy, fragile claims.

The First Allocations Write the Playbook

The most defensible way to understand “first-wave” calls is not as a prize, but as agenda-setting.

If your project is selected early, it can influence which scientific domains become the platform’s default showcases, which software pathways get institutional support, and which performance techniques become the norm. That is why access is a strategy: the first wave does not just run on the system—it helps define what the system is for.
