What each field requires, what sufficient vs. insufficient looks like, and how failure events are logged. For researchers and fellows writing log entries.
| Field | What it captures | Most common failure mode |
|---|---|---|
| WHAT | The specific task or experiment attempted | Too general: "worked on pipeline" instead of the specific action taken |
| WHY | Reasoning, alternatives considered, decision logic | Absent entirely; this is the field most often lost during personnel transitions |
| HOW | Exact methodology, code logic, steps taken | Too high-level: cannot be reproduced from the entry alone |
| ENVIRONMENT | Library versions, cloud config, credentials (names only), dataset checksums | Incomplete: missing version numbers or dataset identifiers |
| RESULTS | Actual outcome including failures, error messages, stack traces | Euphemized: "had some issues" instead of verbatim error output |
| QUESTIONS | Uncertainties, open hypotheses, follow-up threads | Left blank: "done" is treated as equivalent to "no open questions" |
Example excerpt from a compliant entry (HOW, ENVIRONMENT, and RESULTS fields):

HOW: "Added tenacity library retry decorator to scrape_page(). Config: wait_exponential(multiplier=1, min=4, max=60), stop_after_attempt(3), retry_if_exception_type(RateLimitError). Implemented custom RateLimitError exception raised on HTTP 429. Added jitter via random.uniform(0, 2) on top of the base wait. Tested with mock server returning 429s for first 2 requests."

ENVIRONMENT: "Service account: scraper-sa@[project].iam (role: Storage Object Creator) | Target dataset: LinkedIn job postings, snapshot 2024-03-15, SHA256: a3f9..."

RESULTS: "RetryError after max attempts on 14% of requests, higher than expected. Stack trace: tenacity.RetryError: RetryError[<Future at 0x... state=finished raised RateLimitError>]. Root cause unclear: may be IP-level throttling rather than account-level. Need to test with rotating source IPs."
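The retry policy in the example above can be sketched in pure standard-library Python. This is an illustrative re-implementation of the same backoff schedule, not the tenacity code itself; `backoff_wait`, `with_retries`, and their parameters are hypothetical helper names:

```python
import random
import time

class RateLimitError(Exception):
    """Raised by the scraper on HTTP 429 responses."""

def backoff_wait(attempt, multiplier=1, min_wait=4, max_wait=60, jitter=2):
    # Exponential backoff clamped to [min_wait, max_wait], plus uniform jitter,
    # mirroring wait_exponential(multiplier=1, min=4, max=60) with
    # random.uniform(0, 2) added on top of the base wait.
    base = min(max(multiplier * 2 ** attempt, min_wait), max_wait)
    return base + random.uniform(0, jitter)

def with_retries(fn, attempts=3, sleep=time.sleep):
    # Call fn, retrying on RateLimitError up to `attempts` total tries,
    # sleeping the backoff interval between tries.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == attempts:
                raise  # out of attempts; tenacity would raise RetryError here
            sleep(backoff_wait(attempt))
```

With these parameters the first retry waits roughly 4 to 6 seconds, and waits are capped at 62 seconds regardless of attempt count.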
| Event Type | MVAL Treatment | Required Fields | Critical Obligation |
|---|---|---|---|
| Successful pipeline run | Standard MVAL entry | All six fields | None |
| Failed pipeline run | Standard MVAL entry, with rigor identical to a successful run | All six + error verbatim in Results | Do not summarize error messages. Paste them. |
| Partial / ambiguous result | Standard MVAL with explicit uncertainty stated | All six + explicit uncertainty in Questions | Label it partial. Do not present ambiguity as resolution. |
| Abandoned approach | MVAL entry logging the rejection reasoning | Why (critical) + Results (why stopped) | Why you stopped is as important as what you tried. |
| Undocumented prior decision | Retroactive MVAL reconstruction | Why + How + note that this is reconstructed | Mark it explicitly: "Reconstructed [date] from memory / code review." |
Before submitting any MVAL entry, verify all of the following. An entry fails compliance if any item is unchecked.
| Check | Field | Verification question |
|---|---|---|
| ☐ | WHAT | Can a new team member reconstruct what was attempted from this field alone? |
| ☐ | WHY | Does this field explain why this approach rather than the alternatives? |
| ☐ | HOW | Is this reproducible from this field alone, without asking anyone? |
| ☐ | ENVIRONMENT | Are all library versions, cloud specs, and dataset identifiers present? No raw keys? |
| ☐ | RESULTS | Are errors quoted verbatim? Is ambiguity labeled as ambiguity? |
| ☐ | QUESTIONS | Are open questions stated, even if the task is complete? |
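Presence of the six fields can be machine-checked as a first pass over this checklist (the verification questions themselves still need a human reviewer). A minimal sketch; `check_mval` and the entry format are illustrative, not part of any official tooling:

```python
REQUIRED_FIELDS = ("WHAT", "WHY", "HOW", "ENVIRONMENT", "RESULTS", "QUESTIONS")

def check_mval(entry):
    """Return a list of compliance problems for an entry given as a dict of
    field name -> text; an empty list means every field is filled in."""
    problems = []
    for field in REQUIRED_FIELDS:
        value = (entry.get(field) or "").strip()
        if not value:
            problems.append(f"{field} is missing or empty")
    return problems
```

A blank QUESTIONS field is flagged like any other omission, matching the rule that "done" is not equivalent to "no open questions".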
The standard MVAL includes an Environment field designed for cloud computing contexts: library versions, compute instances, API credentials. For executive education and policy research contexts, "environment" refers to organizational and analytical context rather than infrastructure.
| Field | Technical research | Executive education / Policy research |
|---|---|---|
| WHAT | Specific experiment or pipeline run | Specific analysis, case discussion, or decision made |
| WHY | Algorithmic or architectural reasoning | Strategic reasoning; stakeholder considerations; alternatives rejected |
| HOW | Code, tooling, methodology | Analytical framework applied, data sources used, process followed |
| ENVIRONMENT | Cloud specs, library versions, dataset checksums | Organizational context, data vintage, regulatory constraints, team composition |
| RESULTS | Pipeline output, error logs, metrics | Decision reached, recommendation made, consensus or disagreement recorded |
| QUESTIONS | Open technical hypotheses | Unresolved strategic questions, assumptions that need testing |
An official Executive Education MVAL variant with adapted field definitions is on the roadmap. See Roadmap §25.