Add smoke and snapshot test for two-fer #28

Merged · 4 commits · Jun 8, 2019
`docs/smoke-tests.md` (new file, +39)
# Smoke Tests

When you're writing a _new analyzer_, it is very important that you add 🌫 smoke
tests. In general these `execute` the analyzer without a `Runner` or any other
moving parts. This heuristic is great at detecting 🔥 fires, ensuring that when
an internal component changes, nothing breaks.

## Contents

Add a new file `test/analyzers/<slug>/smoke.ts` and copy the following template,
replacing `<slug>` with the actual slug.

```typescript
import { SlugAnalyzer } from '~src/analyzers/<slug>'
import { makeAnalyze } from '~test/helpers/smoke'

const analyze = makeAnalyze(() => new SlugAnalyzer())

describe('When running analysis on <slug>', () => {
  it('can approve as optimal', async () => {
    const solutionContent = `
      // add a code example that SHOULD be approved as optimal
    `.trim()

    const output = await analyze(solutionContent)

    expect(output.status).toBe('approve_as_optimal')
    expect(output.comments.length).toBe(0)
  })
})
```
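
For intuition, `makeAnalyze` from `~test/helpers/smoke` can be thought of as
something like the following hypothetical sketch — the real helper ships with
the repository, and the `run` signature and output shape here are assumptions:

```typescript
// Hypothetical sketch of ~test/helpers/smoke — names, the `run` signature,
// and the output shape are assumptions for illustration only.
interface AnalysisOutput {
  status: string
  comments: unknown[]
}

interface Analyzer {
  run(solutionContent: string): Promise<AnalysisOutput>
}

export function makeAnalyze(factory: () => Analyzer) {
  // Build a fresh analyzer per call and execute it directly on the source
  // text, bypassing the Runner and any other moving parts.
  return (solutionContent: string): Promise<AnalysisOutput> =>
    factory().run(solutionContent)
}
```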

Add additional test cases for `approve_with_comment` and
`disapprove_with_comment`, if your analyzer can actually output those. If you
have example code that should always return `refer_to_mentor`, add a case for
it too.
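
For example, an `approve_with_comment` case could follow the same pattern (the
exact comment expectation below is an assumption; assert whatever your analyzer
actually emits):

```typescript
it('can approve with a comment', async () => {
  const solutionContent = `
    // add a code example that SHOULD be approved, but with commentary
  `.trim()

  const output = await analyze(solutionContent)

  expect(output.status).toBe('approve_with_comment')
  expect(output.comments.length).toBeGreaterThan(0)
})
```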

**Note**: This is not the place to add an exhaustive test of inputs to outputs.
It's merely trying to detect when one of the known cases changes!
`docs/snapshot-tests.md` (new file, +60)
# Snapshot Tests

When maintaining an `analyzer`, you can use the `batch` binary to run the
analyzer on all the fixtures we have.

```bash
bin/batch.sh two-fer
```

The above command generates `analysis.json` for all the `two-fer` fixtures
located in `test/fixtures/two-fer/`. You can use these outputs to check whether
your analyzer produces the right result.
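
The precise shape of `analysis.json` is up to the analyzer; as a hypothetical
illustration, each file pairs one of the four statuses with any commentary:

```typescript
// Assumed rough shape of analysis.json — check your analyzer's real output
// for the authoritative format.
interface Analysis {
  status:
    | 'approve_as_optimal'
    | 'approve_with_comment'
    | 'disapprove_with_comment'
    | 'refer_to_mentor'
  comments: unknown[] // comment identifiers, possibly with parameters
}
```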

Once you have established which solutions should generate which status and with
what commentary, it is a good time to set up a 📸 _snapshot_ test. These record
a snapshot of the output and make sure it stays the same, every single time.

## Contents

Add a new file `test/analyzers/<slug>/snapshot.ts` and copy the following
template, replacing `<slug>` with the actual slug and entering **1 to 20**
fixture numbers per status to be tested (more is better; aim for the full 20
per status).

```typescript
import { SlugAnalyzer } from '~src/analyzers/<slug>'
import { makeTestGenerator } from '~test/helpers/snapshot'

const snapshotTestsGenerator = makeTestGenerator(
  '<slug>',
  () => new SlugAnalyzer()
)

describe('When running analysis on <slug> fixtures', () => {
  snapshotTestsGenerator('approve_as_optimal', [
    // <fixture numbers>
  ])
  snapshotTestsGenerator('approve_with_comment', [])
  snapshotTestsGenerator('disapprove_with_comment', [])
  snapshotTestsGenerator('refer_to_mentor', [])
})
```
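
For intuition, `makeTestGenerator` (from `~test/helpers/snapshot`) can be
thought of as something like the following hypothetical sketch — the real
helper ships with the repository, and the fixture layout and `run` signature
here are assumptions:

```typescript
import { readFileSync } from 'fs'
import { join } from 'path'

// Hypothetical sketch only — names, paths, and signatures are assumptions.
type Status =
  | 'approve_as_optimal'
  | 'approve_with_comment'
  | 'disapprove_with_comment'
  | 'refer_to_mentor'

interface Analyzer {
  run(solutionContent: string): Promise<{ status: Status }>
}

export function makeTestGenerator(slug: string, factory: () => Analyzer) {
  return (expectedStatus: Status, fixtureNumbers: number[]): void => {
    fixtureNumbers.forEach((fixture) => {
      it(`matches the ${expectedStatus} snapshot for fixture ${fixture}`, async () => {
        // Assumed fixture layout: test/fixtures/<slug>/<number>/<solution file>
        const solution = readFileSync(
          join('test', 'fixtures', slug, String(fixture), 'solution.ts'),
          'utf8'
        )

        const output = await factory().run(solution)

        // Guard against a fixture filed under the wrong status...
        expect(output.status).toBe(expectedStatus)
        // ...then record (or compare against) the stored snapshot.
        expect(output).toMatchSnapshot()
      })
    })
  }
}
```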

**Note**: This is not the place to add an exhaustive test of inputs to outputs.
It's merely trying to detect when one of the known cases changes!

## Initial run

After you've added this test file, run the test suite (`yarn test`). It will
generate a snapshot file. The `snapshotTestsGenerator` also validates the output
status, just in case you entered a fixture number in the wrong category.
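
The generated snapshot file is a plain Jest snapshot; an entry might look
roughly like this (the exact keys and serialization depend on Jest and on your
analyzer's output shape):

```typescript
// Illustrative excerpt of test/analyzers/<slug>/__snapshots__/snapshot.ts.snap
exports[`When running analysis on <slug> fixtures approve_as_optimal 1`] = `
Object {
  "comments": Array [],
  "status": "approve_as_optimal",
}
`;
```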

## Verification and updating

If the output for one of the fixtures changes, the test suite will fail.

1. Go through each failure manually and validate that it's as expected
1. Run `yarn test -u` to update the snapshots
1. Commit the changed snapshot file (at `test/analyzers/<slug>/__snapshots__/`)
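
Put together, a typical update cycle might look like this (the commit message
is only an example):

```bash
yarn test                                   # see which snapshots now fail
# ...manually verify that each change in output is expected...
yarn test -u                                # rewrite the stored snapshots
git add test/analyzers/<slug>/__snapshots__/
git commit -m "Update <slug> snapshots"
```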