Compare commits

...

67 Commits

Author SHA1 Message Date
Cursor Agent 65de9dd18c ci(release): drop unsupported windows/arm targets
Co-authored-by: Thomas Pelletier <thomas@pelletier.dev>
2026-03-03 06:15:04 +00:00
dependabot[bot] b3575580f9 build(deps): bump goreleaser/goreleaser-action from 6 to 7 (#1035) 2026-03-03 00:47:47 -05:00
dependabot[bot] a0be52f4c1 build(deps): bump actions/upload-artifact from 6 to 7 (#1036) 2026-03-03 00:47:35 -05:00
Thomas Pelletier 316bfc66a4 Support Unmarshaler interface for tables and array tables (#1027)
Fixes #873

Extend the unstable.Unmarshaler interface support to work with tables
and array tables, not just single values.

When a type implementing unstable.Unmarshaler is the target of a table
(e.g., [table] or [[array]]), the UnmarshalTOML method receives a
synthetic InlineTable node containing all the key-value pairs belonging
to that table.

Key changes:
- Add handleKeyValuesUnmarshaler to collect and process table content
- Add copyExpressionNodes to deep-copy AST nodes for synthetic tables
- Add helper functions in unstable/ast.go for node manipulation
- Update documentation for EnableUnmarshalerInterface
- Add comprehensive tests for table and array table unmarshaling

* Implement bytes-based Unmarshaler interface for tables and arrays (#873)

This change brings back support for the unstable.Unmarshaler interface
for tables and array tables, addressing issue #873.

Key changes:
- Changed UnmarshalTOML signature from (*Node) to ([]byte) to provide
  raw TOML bytes instead of AST nodes
- Added RawMessage type (similar to json.RawMessage) for capturing raw
  TOML bytes for later processing
- Updated handleKeyValuesUnmarshaler to reconstruct key-value lines
  from the parsed keys and raw value bytes
- Added support for slice types implementing Unmarshaler (e.g., RawMessage)
- Removed unused AST helper functions from unstable/ast.go

The bytes-based interface allows users to:
- Get raw TOML bytes for custom parsing
- Delay TOML decoding using RawMessage
- Implement custom unmarshaling logic for complex types

Tests added for:
- Table unmarshaler with various scenarios
- Array table unmarshaler
- Split tables (same parent defined in multiple places)
- RawMessage usage
- Nested tables and mixed regular fields

* Fix lint issues and improve test coverage for Unmarshaler interface

- Apply De Morgan's law in keyNeedsQuoting to satisfy staticcheck QF1001
- Remove unused splitTableUnmarshaler type from test
- Fix unused parameter lint warning in errorUnmarshaler873
- Add test for quoted keys that need special handling
- Add test for error propagation from UnmarshalTOML
- Update customTable873 parser to handle quoted keys properly

Coverage improved:
- handleKeyValuesUnmarshaler: 80.0% -> 93.3%
- keyNeedsQuoting: 66.7% -> 83.3%
- Overall main package: 97.2% -> 97.5%

* Add test for dotted keys to improve coverage

Add TestIssue873_DottedKeys to test dotted key handling (e.g., sub.key = value)
in the Unmarshaler interface. This improves coverage for handleKeyValuesUnmarshaler
from 93.3% to 96.7%.

* Add double pointer test to achieve 100% coverage for handleKeyValues

Add TestIssue873_DoublePointerUnmarshaler to test pointer-to-pointer
to Unmarshaler types. This covers the pointer dereferencing loop in
handleKeyValues, bringing its coverage from 88% to 100%.

Total coverage: 97.4%

* Add Example tests and fix raw value extraction for boolean types

Add two godoc Example tests:
- ExampleDecoder_EnableUnmarshalerInterface_dynamicConfig: shows dynamic
  unmarshaling based on a type field
- ExampleDecoder_EnableUnmarshalerInterface_rawMessage: demonstrates
  RawMessage usage for deferred parsing

Fix handleKeyValuesUnmarshaler to handle values where Raw.Length == 0
(like boolean types) by using value.Data as fallback.

* Preserve original formatting in Unmarshaler by using raw byte ranges

Instead of reconstructing key-value lines from parsed components, now
uses the original raw bytes from the document. This preserves:
- Whitespace around '=' (e.g., "key   =   value")
- String quoting style (basic vs literal)
- Number formats (hex, octal, binary)
- Inline table formatting

Changes:
- Add Raw range tracking to KeyValue expressions in parseKeyval
- Update handleKeyValuesUnmarshaler to use expr.Raw directly
- Remove keyNeedsQuoting helper (no longer needed)
- Add TestIssue873_FormattingPreservation test
- Update expected output in ExampleParser_comments

* Prevent test matrix from canceling on first failure

Add fail-fast: false to the test workflow strategy so that all
OS/Go version combinations continue running even if one fails.
This provides better visibility into which specific combinations
have issues.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-02-11 09:57:23 -05:00
Thomas Pelletier 2edc61f171 Fix panic when unmarshaling datetime values to incompatible types (#1028) (#1029)
Return a type mismatch error instead of panicking when datetime values
(DateTime, LocalDate, LocalTime, LocalDateTime) are unmarshaled into
incompatible Go types. This makes the decoder safer for processing
untrusted TOML input.

https://claude.ai/code/session_011jwvtDS5M2KncLrqJpgMr5

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 22:04:40 -05:00
Thomas Pelletier 4a1b05ca08 UnmarshalText falls back to struct unmarshaling for tables and arrays (#1026)
When a type implements encoding.TextUnmarshaler, the unmarshaler now
skips calling UnmarshalText for Array and InlineTable TOML values.
This allows types to support both:
- Simple string values via UnmarshalText
- Structured table values via field-by-field unmarshaling

Previously, UnmarshalText was called unconditionally, which prevented
proper struct unmarshaling when the TOML value was a table or array
of tables.

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-09 13:46:38 -05:00
Thomas Pelletier 003aa0993b Fix nil pointer map values not being marshaled (#1025)
When marshaling a map with nil pointer values, the keys were being
silently dropped, breaking round-trip fidelity. For example:

    map[string]*struct{}{"foo": nil}

Would produce an empty TOML document instead of "[foo]".

This change converts nil pointer values in maps to their zero values
(consistent with how nil pointers in slices are handled), allowing the
keys to be preserved as empty tables.

Nil interface values (map[string]any{"foo": nil}) are still skipped
since there's no type information to derive a zero value.

Fixes #975

Also, pin golangci-lint version to v2.8.0 in CI and document in AGENTS.md

- Explicitly set golangci-lint version in lint.yml to ensure consistent
  behavior across CI runs
- Update AGENTS.md with instructions to use the same linter version locally

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-09 11:08:31 -05:00
dependabot[bot] 84d730b6c4 build(deps): bump golangci/golangci-lint-action from 8 to 9 (#1022) 2026-01-05 21:23:56 -05:00
dependabot[bot] 97bd897177 build(deps): bump actions/setup-go from 5 to 6 (#1023) 2026-01-05 21:23:35 -05:00
dependabot[bot] 7924b1816f build(deps): bump actions/checkout from 5 to 6 (#1024) 2026-01-05 21:23:15 -05:00
Thomas Pelletier 2a07b6d9db Update to Go 1.25 (#1018)
Update CI workflows to test against Go 1.24 and 1.25, and use Go 1.25 for
coverage and release builds.

## Benchstat Report: Go 1.24 vs Go 1.25

Benchmark comparison between Go 1.24.7 and Go 1.25.1 (10 runs each):

### Execution Time (sec/op)

| Benchmark | Go 1.24 | Go 1.25 | Delta |
|-----------|---------|---------|-------|
| UnmarshalDataset/config | 26.25ms | 26.00ms | ~ (p=0.280) |
| UnmarshalDataset/canada | 88.71ms | 84.94ms | **-4.26%**  |
| UnmarshalDataset/citm_catalog | 33.71ms | 34.06ms | ~ (p=0.684) |
| UnmarshalDataset/twitter | 17.19ms | 17.33ms | ~ (p=0.971) |
| UnmarshalDataset/code | 107.4ms | 108.1ms | ~ (p=0.393) |
| UnmarshalDataset/example | 237.9µs | 251.3µs | +5.64% |
| Unmarshal/SimpleDocument/struct | 872.3ns | 848.9ns | ~ (p=0.165) |
| Unmarshal/SimpleDocument/map | 1.191µs | 1.278µs | +7.31% |
| Unmarshal/ReferenceFile/struct | 57.14µs | 57.95µs | ~ (p=0.089) |
| Unmarshal/ReferenceFile/map | 87.89µs | 92.88µs | +5.69% |
| Unmarshal/HugoFrontMatter | 16.06µs | 15.95µs | ~ (p=0.529) |
| Marshal/SimpleDocument/struct | 536.5ns | 563.5ns | +5.03% |
| Marshal/SimpleDocument/map | 651.0ns | 675.1ns | +3.72% |
| Marshal/ReferenceFile/struct | 44.63µs | 50.84µs | +13.91% |
| Marshal/ReferenceFile/map | 51.58µs | 57.06µs | +10.61% |
| Marshal/HugoFrontMatter | 10.04µs | 10.57µs | +5.27% |
| **geomean** | 140.6µs | 145.1µs | +3.18% |

### Summary

- Notable improvement: UnmarshalDataset/canada shows a 4.26% speedup
- Memory allocation and allocation counts remain identical
- Some marshal operations show slight slowdowns (likely Go runtime changes)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-04 13:59:18 -05:00
Thomas Pelletier 692b98560b Support custom IsZero() methods with omitzero tag (#1020)
The omitzero tag now respects custom IsZero() methods on types,
similar to how encoding/json handles this. Previously, only
reflect.Value.IsZero() was used, which ignores user-defined
implementations.

Fixes #1003

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-04 13:58:47 -05:00
Thomas Pelletier 99cd40b175 Reject leap seconds to prevent year overflow (#1019)
Go's time.Date() normalizes leap seconds (second=60) by adding 1 minute.
When parsing the maximum valid TOML date 9999-12-31 23:59:60z, this causes
the year to overflow to 10000, which exceeds the valid TOML year range
(0000-9999) and breaks round-trip serialization.

The fix rejects leap seconds (second > 59) during parsing. This is
consistent with the resolution of issue #913 which determined that
emitting an error is less surprising than silently normalizing leap
seconds.

Fixes #1015

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-04 13:40:19 -05:00
Thomas Pelletier 3aaf147e3e Remove unsafe package usage (#1021)
Removes all unsafe operations from go-toml, making the codebase
fully safe Go code. The internal/danger package that contained
unsafe operations has been deleted.

Changes:
- Replace pointer-based node navigation with index-based navigation
- Node.next and Node.child now store absolute indices into the
  backing nodes slice instead of relative offsets
- Add nodes pointer to Node and Iterator for safe navigation
- Replace danger.TypeID with reflect.Type for cache keys
- Delete internal/danger package entirely

Performance overhead is under 10% compared to the unsafe version,
which is acceptable for the safety and maintainability benefits.

[Cursor][claude-sonnet-4-20250514]
2026-01-04 13:16:47 -05:00
Nathan Baulch a675c6b3e2 Upgrade to golangci-lint v2 (#1008) 2026-01-04 09:54:29 -05:00
Thomas Pelletier 9702fae9b8 Add AGENTS.md for AI agent contribution guidelines (#1017)
This file provides a concise summary of the contribution guidelines
from CONTRIBUTING.md, specifically tailored for AI agents working on
the codebase. It covers testing requirements, backward compatibility,
performance considerations, and code style expectations.

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-03 21:41:34 -05:00
Alexander Hecke 3cf1eb2312 improve Unmarshaling documentation (#1016) 2026-01-03 21:12:35 -05:00
Nathan Baulch 2af3554f90 Update toml-test to v1.6.0 (#1007) 2026-01-03 20:45:06 -05:00
dependabot[bot] 180c6ba2ba build(deps): bump actions/setup-go from 5 to 6 (#1002)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-03 20:43:53 -05:00
dependabot[bot] dafc4173ef build(deps): bump github/codeql-action from 3 to 4 (#1006)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3 to 4.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-03 20:43:43 -05:00
dependabot[bot] f1a83be671 build(deps): bump actions/upload-artifact from 4 to 6 (#1011)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-03 20:43:33 -05:00
dependabot[bot] 5aeb70b3f0 build(deps): bump actions/checkout from 5 to 6 (#1010)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-03 20:43:20 -05:00
W. Michael Petullo 8384a5683c Use constant format strings with Printf-like functions (#1013)
Recent versions of Go object to the use of non-constant variables as
format strings. This commit fixes errors like this:

cli.go:26:47: non-constant format string in call to fmt.Fprintf

Signed-off-by: W. Michael Petullo <mike@flyn.org>
2026-01-03 20:42:58 -05:00
Étienne BERSAC 4369957cb4 Unwrap strict errors (#1012) 2025-12-21 16:20:24 +01:00
dependabot[bot] a0e8464967 build(deps): bump actions/checkout from 4 to 5 (#1001)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-26 09:53:08 +02:00
Nathan Baulch c57d0d559f Add omitzero tag support (#998) 2025-08-25 08:06:48 +02:00
Thomas Pelletier 644602b845 Script to test all versions of go (#1000) 2025-08-24 12:40:29 +02:00
Nathan Baulch 36df8eef6e General cleanup (#999) 2025-08-24 12:18:46 +02:00
Thomas Pelletier 18a2148713 Handle array table into an empty slice (#997)
Fix #995
2025-08-21 12:05:41 +02:00
Thomas Pelletier bc9958322f Add missing UnmarshalTOML call (#996)
Fixes #994.
2025-08-21 10:39:23 +02:00
Dustin Spicuzza 6d56ac8027 marshal: don't escape quotes unnecessarily (#991)
Only runs of 3 or more consecutive quotation marks need to be escaped. We
choose to escape every quotation mark in a sequence when 3 or more appear
consecutively.

Fixes #990

---------

Co-authored-by: Thomas Pelletier <thomas@pelletier.dev>
2025-08-21 08:19:16 +02:00
dependabot[bot] 098464b61b build(deps): bump actions/checkout from 4 to 5 (#993)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-21 08:10:55 +02:00
Oleksandr Redko 85e2448ce5 refactor: Simplify t.Fatalf (#984) 2025-05-10 15:14:34 -04:00
Thomas Pelletier ee07c9203b Update to go 1.24 (#982) 2025-04-07 07:11:38 -04:00
Alex Mikitik 014204cfb7 Replace stretchr/testify with an internal test suite (#981)
As recommended, an `internal/assert` package was added with a reduced set of assertions. All tests were then refactored to use the internal assertions. When more complex assertions were used, they were rewritten using plain logic and the simplified assertions.

Fancy formatting for failures was omitted. The `internal/assert/assertions.diff` function could be overridden for better formatting; that is where diff libraries are used in other test suites.

Refs: #872

Co-authored-by: Alex Mikitik <alex.mikitik@oracle.com>
2025-04-07 06:36:37 -04:00
Oleksandr Redko 923b2ab478 Fix typos in comments and tests (#972) 2024-11-16 11:30:13 -05:00
Thomas Pelletier af236b689f Fix goreleaser deprecated attribute name (#964)
https://goreleaser.com/deprecations/#snapshotname_template
2024-08-23 13:56:48 -04:00
Thomas Pelletier b730b2be5d Bump testing to go 1.23 (#961) 2024-08-17 16:26:05 -04:00
vito a437caafe5 Fix reflect.Pointer backward compatibility (#956) 2024-08-17 16:07:56 -04:00
guoguangwu be6c57be30 Fix readme typo (#951) 2024-08-17 15:56:40 -04:00
Daniel Weiße d55304782e Allow int, uint, and floats as map keys (#958)
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
2024-08-17 15:44:21 -04:00
Daniel Weiße 0977c05dd5 Update goreleaser action to v6 and set goreleaser binary to v2 (#959)
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
2024-08-17 15:40:55 -04:00
Daniel Martí bccd6e48f4 allocate unstable.Parser as part of decoder (#953)
This way, calls to Unmarshal or Decoder.Decode allocate once
at the start rather than twice.

                                │    old     │               new                │
                                │ allocs/op  │ allocs/op   vs base              │
    Unmarshal/HugoFrontMatter-8   141.0 ± 0%   140.0 ± 0%  -0.71% (p=0.002 n=6)
2024-05-24 14:49:06 -04:00
Daniel Martí 9b890cf9c5 go.mod: bump minimum and language to 1.21 (#949)
* go.mod: bump minimum and language to 1.21

CI only tests Go 1.21 and 1.22, and older versions of Go are no longer
getting any bug or security fixes, so advertise that we only support
Go 1.21 or later via go.mod.

While here, ensure the module is tidy and resolve deprecation warnings,
and remove now-unnecessary Go version build tags.

* replace sort.Slice with slices.SortFunc

The latter is more efficient, and allocates less, since sort.Slice
needs to go through sort.Interface which causes allocations.

    goos: linux
    goarch: amd64
    pkg: github.com/pelletier/go-toml/v2/benchmark
    cpu: AMD Ryzen 7 PRO 5850U with Radeon Graphics
                              │     old     │                new                 │
                              │   sec/op    │   sec/op     vs base               │
    Marshal/HugoFrontMatter-8   7.612µ ± 1%   6.730µ ± 1%  -11.59% (p=0.002 n=6)

                              │     old      │                 new                 │
                              │     B/s      │     B/s       vs base               │
    Marshal/HugoFrontMatter-8   65.52Mi ± 1%   74.11Mi ± 1%  +13.11% (p=0.002 n=6)

                              │     old      │                new                 │
                              │     B/op     │     B/op      vs base              │
    Marshal/HugoFrontMatter-8   5.672Ki ± 0%   5.266Ki ± 0%  -7.16% (p=0.002 n=6)

                              │    old     │                new                │
                              │ allocs/op  │ allocs/op   vs base               │
    Marshal/HugoFrontMatter-8   85.00 ± 0%   73.00 ± 0%  -14.12% (p=0.002 n=6)
2024-05-24 10:58:39 -04:00
大可 a3d5a0bb53 fix: sync pool race condition (#947) 2024-04-29 06:02:54 -04:00
Daniel Weiße d00d2cca6e Fix indentation of custom type arrays (#944)
Signed-off-by: Daniel Weiße <dw@edgeless.systems>
2024-04-12 10:42:12 -04:00
dependabot[bot] 86608d7fca build(deps): bump github/codeql-action from 2 to 3 (#919)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2 to 3.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-19 13:24:46 -04:00
dependabot[bot] 4a1877957a build(deps): bump actions/setup-go from 4 to 5 (#916)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-19 13:24:37 -04:00
dependabot[bot] 3021d6d033 build(deps): bump actions/upload-artifact from 3 to 4 (#920)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-19 13:24:24 -04:00
Thomas Pelletier 32788f26f8 Update release instructions (#941) 2024-03-19 12:47:39 -04:00
rszyma 8ed6d131eb Decode: unstable/Unmarshal interface (#940)
Co-authored-by: Pavlos Karakalidis <pkarakal@pkarakal.com>
Co-authored-by: Thomas Pelletier <thomas@pelletier.codes>
2024-03-19 12:33:12 -04:00
dependabot[bot] 7dad87762a build(deps): bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#936)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-12 09:06:45 -04:00
Thomas Pelletier 2b69615b5d Go 1.22 support (#935) 2024-02-27 15:30:13 -05:00
Thomas Pelletier 06fb30bf2e Decode: fix reuse of slice for array tables (#934)
When decoding into a non-empty slice, it needs to be emptied so that only the
tables contained in the document are present in the resulting value.

Arrays are not impacted because their unmarshal offset is tracked separately.

Fixes #931
2024-02-27 15:28:49 -05:00
Thomas Pelletier 2e087bdf5f Run tests on macos/M1 (#929)
https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
2024-01-30 19:15:50 -05:00
Thomas Pelletier caeb9f9631 Fix marshaler typos (#927) 2024-01-30 19:01:55 -05:00
Rdbo e7223fb40e fix: odd indentation in README (#928) 2024-01-30 19:01:43 -05:00
Jakub Wilk 05bedf36d8 Fix README typo (#925) 2024-01-25 15:21:33 -08:00
Daniel Graña f5486d590f Support encoding json.Number type (#923)
Co-authored-by: Thomas Pelletier <thomas@pelletier.codes>
2024-01-25 15:21:02 -08:00
Daniel Graña 2ca21fb7b4 Support encoding of pointers to embedded structs (#924) 2024-01-23 13:06:33 -05:00
Thomas Pelletier 34765b4a9e Fix unmarshaling of nested non-exported struct (#917)
Fixes #915
2023-12-11 14:17:49 -05:00
Moritz Poldrack 358c8d2c23 Use toml-test to generate tests (#911)
Fixes: #909
2023-10-26 12:05:02 -06:00
Martin Tournoij fd8d0bf4d9 Add cmd/gotoml-test-encoder (#907) 2023-10-23 14:40:44 -06:00
Thomas Pelletier a76e18e8c5 Fix benchmark script (#905) 2023-10-02 13:49:01 -04:00
dependabot[bot] dff0c128d0 build(deps): bump docker/login-action from 2 to 3 (#901)
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-17 18:11:24 -04:00
Thomas Pelletier 3573ce3770 Update SECURITY.md
Remove placeholder.
2023-09-04 09:43:32 -04:00
dependabot[bot] ae933f2e2a build(deps): bump actions/checkout from 3 to 4 (#896)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-04 09:42:37 -04:00
70 changed files with 6244 additions and 1945 deletions
+1 -1
@@ -19,7 +19,7 @@ jobs:
       dry-run: false
       language: go
     - name: Upload Crash
-      uses: actions/upload-artifact@v3
+      uses: actions/upload-artifact@v7
       if: failure() && steps.build.outcome == 'success'
       with:
         name: artifacts
+5 -5
@@ -35,11 +35,11 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v6
       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
+        uses: github/codeql-action/init@v4
         with:
           languages: ${{ matrix.language }}
       # If you wish to specify custom queries, you can do so here or in a config file.
@@ -47,10 +47,10 @@ jobs:
       # Prefix the list here with "+" to use these queries and those in the config file.
       # queries: ./path/to/local/query, your-org/your-repo/queries@main
       # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
       # If this step fails, then you should remove it and run the build manually (see below)
       - name: Autobuild
-        uses: github/codeql-action/autobuild@v2
+        uses: github/codeql-action/autobuild@v4
       # ️ Command-line programs to run using the OS shell.
       # 📚 https://git.io/JvXDl
@@ -64,4 +64,4 @@ jobs:
       # make release
       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
+        uses: github/codeql-action/analyze@v4
+3 -3
@@ -9,12 +9,12 @@ jobs:
     runs-on: "ubuntu-latest"
     name: report
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Setup go
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@v6
        with:
-          go-version: "1.21"
+          go-version: "1.25"
      - name: Run tests with coverage
        run: ./ci.sh coverage -d "${GITHUB_BASE_REF-HEAD}"
+36
@@ -0,0 +1,36 @@
name: Go Versions Compatibility Test
on:
workflow_dispatch:
inputs:
go_versions:
description: 'Go versions to test (space-separated, e.g., "1.21 1.22 1.23")'
required: false
default: ''
type: string
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Run Go versions compatibility test
run: |
VERSIONS="${{ github.event.inputs.go_versions }}"
./test-go-versions.sh --output ./test-results $VERSIONS
- name: Upload test results
uses: actions/upload-artifact@v7
with:
name: go-versions-test-results
path: |
test-results/
retention-days: 30
+22
@@ -0,0 +1,22 @@
name: lint
on:
pull_request:
branches:
- v2
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Setup go
uses: actions/setup-go@v6
with:
go-version: "1.24"
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v9
with:
version: v2.8.0
+7 -7
@@ -16,24 +16,24 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Set up Go
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@v6
        with:
-          go-version: "1.21"
+          go-version: "1.25"
      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Run GoReleaser
-        uses: goreleaser/goreleaser-action@v3
+        uses: goreleaser/goreleaser-action@v7
        with:
          distribution: goreleaser
-          version: latest
+          version: '~> v2'
-          args: release ${{ inputs.args }} --rm-dist
+          args: release ${{ inputs.args }} --clean
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+6 -5
@@ -10,23 +10,24 @@ on:
 jobs:
   build:
     strategy:
+      fail-fast: false
       matrix:
-        os: [ 'ubuntu-latest', 'windows-latest', 'macos-latest' ]
+        os: [ 'ubuntu-latest', 'windows-latest', 'macos-latest', 'macos-14' ]
-        go: [ '1.20', '1.21' ]
+        go: [ '1.24', '1.25' ]
     runs-on: ${{ matrix.os }}
     name: ${{ matrix.go }}/${{ matrix.os }}
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v6
        with:
          fetch-depth: 0
      - name: Setup go ${{ matrix.go }}
-        uses: actions/setup-go@v4
+        uses: actions/setup-go@v6
        with:
          go-version: ${{ matrix.go }}
      - name: Run unit tests
        run: go test -race ./...
  release-check:
    if: ${{ github.ref != 'refs/heads/v2' }}
-    uses: pelletier/go-toml/.github/workflows/release.yml@v2
+    uses: ./.github/workflows/release.yml
    with:
      args: --snapshot
+3 -1
@@ -3,4 +3,6 @@ fuzz/
 cmd/tomll/tomll
 cmd/tomljson/tomljson
 cmd/tomltestgen/tomltestgen
 dist
+tests/
+test-results
+33 -41
@@ -1,84 +1,76 @@
-[service]
-golangci-lint-version = "1.39.0"
-[linters-settings.wsl]
-allow-assign-and-anything = true
-[linters-settings.exhaustive]
-default-signifies-exhaustive = true
+version = "2"
 [linters]
-disable-all = true
+default = "none"
 enable = [
   "asciicheck",
   "bodyclose",
-  "cyclop",
-  "deadcode",
-  "depguard",
   "dogsled",
   "dupl",
   "durationcheck",
   "errcheck",
   "errorlint",
   "exhaustive",
-  # "exhaustivestruct",
-  "exportloopref",
   "forbidigo",
-  # "forcetypeassert",
-  "funlen",
-  "gci",
-  # "gochecknoglobals",
   "gochecknoinits",
-  "gocognit",
   "goconst",
   "gocritic",
-  "gocyclo",
-  "godot",
-  "godox",
-  # "goerr113",
-  "gofmt",
-  "gofumpt",
+  "godoclint",
   "goheader",
-  "goimports",
-  "golint",
-  "gomnd",
-  # "gomoddirectives",
   "gomodguard",
   "goprintffuncname",
   "gosec",
-  "gosimple",
   "govet",
-  # "ifshort",
   "importas",
   "ineffassign",
   "lll",
   "makezero",
+  "mirror",
   "misspell",
   "nakedret",
-  "nestif",
   "nilerr",
-  # "nlreturn",
   "noctx",
   "nolintlint",
-  #"paralleltest",
+  "perfsprint",
   "prealloc",
   "predeclared",
   "revive",
   "rowserrcheck",
   "sqlclosecheck",
   "staticcheck",
-  "structcheck",
-  "stylecheck",
-  # "testpackage",
   "thelper",
   "tparallel",
-  "typecheck",
   "unconvert",
   "unparam",
   "unused",
-  "varcheck",
+  "usetesting",
   "wastedassign",
   "whitespace",
-  # "wrapcheck",
-  # "wsl"
 ]
+[linters.settings.exhaustive]
+default-signifies-exhaustive = true
+[linters.settings.lll]
+line-length = 150
+[[linters.exclusions.rules]]
+path = ".test.go"
+linters = ["goconst", "gosec"]
+[[linters.exclusions.rules]]
+path = "main.go"
+linters = ["forbidigo"]
+[[linters.exclusions.rules]]
+path = "internal"
+linters = ["revive"]
+text = "(exported|indent-error-flow): "
+[formatters]
+enable = [
+  "gci",
+  "gofmt",
+  "gofumpt",
+  "goimports",
+]
+2 -4
@@ -1,3 +1,4 @@
+version: 2
 before:
   hooks:
     - go mod tidy
@@ -21,7 +22,6 @@ builds:
       - linux_riscv64
       - windows_amd64
       - windows_arm64
-      - windows_arm
       - darwin_amd64
       - darwin_arm64
   - id: tomljson
@@ -41,7 +41,6 @@ builds:
       - linux_riscv64
       - windows_amd64
       - windows_arm64
-      - windows_arm
       - darwin_amd64
       - darwin_arm64
   - id: jsontoml
@@ -61,7 +60,6 @@ builds:
       - linux_arm
       - windows_amd64
       - windows_arm64
-      - windows_arm
       - darwin_amd64
       - darwin_arm64
 universal_binaries:
@@ -112,7 +110,7 @@ dockers:
 checksum:
   name_template: 'sha256sums.txt'
 snapshot:
-  name_template: "{{ incpatch .Version }}-next"
+  version_template: "{{ incpatch .Version }}-next"
 release:
   github:
     owner: pelletier
+64
@@ -0,0 +1,64 @@
# Agent Guidelines for go-toml
This file provides guidelines for AI agents contributing to go-toml. All agents must follow these rules derived from [CONTRIBUTING.md](./CONTRIBUTING.md).
## Project Overview
go-toml is a TOML library for Go. The goal is to provide an easy-to-use and efficient TOML implementation that gets the job done without getting in the way.
## Code Change Rules
### Backward Compatibility
- **No backward-incompatible changes** unless explicitly discussed and approved
- Avoid breaking people's programs unless absolutely necessary
### Testing Requirements
- **All bug fixes must include regression tests**
- **All new code must be tested**
- Run tests before submitting: `go test -race ./...`
- Test coverage must not decrease. Check with:
```bash
go test -covermode=atomic -coverprofile=coverage.out
go tool cover -func=coverage.out
```
- All lines of code touched by changes should be covered by tests
### Performance Requirements
- go-toml aims to stay efficient; avoid performance regressions
- Run benchmarks to verify: `go test ./... -bench=. -count=10`
- Compare results using [benchstat](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat)
### Documentation
- New features or feature extensions must include documentation
- Documentation lives in [README.md](./README.md) and throughout source code
### Code Style
- Follow existing code format and structure
- Code must pass `go fmt`
- Code must pass linting with the same golangci-lint version as CI (see version in `.github/workflows/lint.yml`):
```bash
# Install specific version (check lint.yml for current version)
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/HEAD/install.sh | sh -s -- -b $(go env GOPATH)/bin <version>
# Run linter
golangci-lint run ./...
```
### Commit Messages
- Commit messages must explain **why** the change is needed
- Keep messages clear and informative even if details are in the PR description
## Pull Request Checklist
Before submitting:
1. Tests pass (`go test -race ./...`)
2. No backward-incompatible changes (unless discussed)
3. Relevant documentation added/updated
4. No performance regression (verify with benchmarks)
5. Title is clear and understandable for changelog
+60 -21
@@ -33,7 +33,7 @@ The documentation is present in the [README][readme] and thorough the source
 code. On release, it gets updated on [pkg.go.dev][pkg.go.dev]. To make a change
 to the documentation, create a pull request with your proposed changes. For
 simple changes like that, the easiest way to go is probably the "Fork this
-project and edit the file" button on Github, displayed at the top right of the
+project and edit the file" button on GitHub, displayed at the top right of the
 file. Unless it's a trivial change (for example a typo), provide a little bit of
 context in your pull request description or commit message.
@@ -92,6 +92,48 @@ However, given GitHub's new policy to _not_ run Actions on pull requests until a
 maintainer clicks on button, it is highly recommended that you run them locally
 as you make changes.

+### Test across Go versions
+
+The repository includes tooling to test go-toml across multiple Go versions
+(1.11 through 1.25) both locally and in GitHub Actions.
+
+#### Local testing with Docker
+
+Prerequisites: Docker installed and running, Bash shell, `rsync` command.
+
+```bash
+# Test all Go versions in parallel (default)
+./test-go-versions.sh
+
+# Test specific versions
+./test-go-versions.sh 1.21 1.22 1.23
+
+# Test sequentially (slower but uses less resources)
+./test-go-versions.sh --sequential
+
+# Verbose output with custom results directory
+./test-go-versions.sh --verbose --output ./my-results 1.24 1.25
+
+# Show all options
+./test-go-versions.sh --help
+```
+
+The script creates Docker containers for each Go version and runs the full test
+suite. Results are saved to a `test-results/` directory with individual logs and
+a comprehensive summary report.
+
+The script only exits with a non-zero status code if either of the two most
+recent Go versions fails.
+
+#### GitHub Actions testing (maintainers)
+
+1. Go to the **Actions** tab in the GitHub repository
+2. Select **"Go Versions Compatibility Test"** from the workflow list
+3. Click **"Run workflow"**
+4. Optionally customize:
+   - **Go versions**: Space-separated list (e.g., `1.21 1.22 1.23`)
+   - **Execution mode**: Parallel (faster) or sequential (more stable)
+
 ### Check coverage

 We use `go tool cover` to compute test coverage. Most code editors have a way to
@@ -111,7 +153,7 @@ code lowers the coverage.
 Go-toml aims to stay efficient. We rely on a set of scenarios executed with Go's
 builtin benchmark systems. Because of their noisy nature, containers provided by
-Github Actions cannot be reliably used for benchmarking. As a result, you are
+GitHub Actions cannot be reliably used for benchmarking. As a result, you are
 responsible for checking that your changes do not incur a performance penalty.
 You can run their following to execute benchmarks:
@@ -165,25 +207,22 @@ Checklist:

 ### New release

-1. Decide on the next version number. Use semver.
-2. Generate release notes using [`gh`][gh]. Example:
-   ```
-   $ gh api -X POST \
-     -F tag_name='v2.0.0-beta.5' \
-     -F target_commitish='v2' \
-     -F previous_tag_name='v2.0.0-beta.4' \
-     --jq '.body' \
-     repos/pelletier/go-toml/releases/generate-notes
-   ```
-3. Look for "Other changes". That would indicate a pull request not labeled
-   properly. Tweak labels and pull request titles until changelog looks good for
-   users.
-4. [Draft new release][new-release].
-5. Fill tag and target with the same value used to generate the changelog.
-6. Set title to the new tag value.
-7. Paste the generated changelog.
-8. Check "create discussion", in the "Releases" category.
-9. Check pre-release if new version is an alpha or beta.
+1. Decide on the next version number. Use semver. Review commits since last
+   version to assess.
+2. Tag release. For example:
+   ```
+   git checkout v2
+   git pull
+   git tag v2.2.0
+   git push --tags
+   ```
+3. CI automatically builds a draft GitHub release. Review it and edit as
+   necessary. Look for "Other changes". That would indicate a pull request not
+   labeled properly. Tweak labels and pull request titles until changelog looks
+   good for users.
+4. Check "create discussion" box, in the "Releases" category.
+5. If new version is an alpha or beta only, check pre-release box.

 [issues-tracker]: https://github.com/pelletier/go-toml/issues
 [bug-report]: https://github.com/pelletier/go-toml/issues/new?template=bug_report.md
+120 -59
@@ -98,16 +98,20 @@ Given the following struct, let's see how to read it and write it as TOML:

 ```go
 type MyConfig struct {
 	Version int
 	Name    string
 	Tags    []string
 }
 ```

 ### Unmarshaling

 [`Unmarshal`][unmarshal] reads a TOML document and fills a Go structure with its
-content. For example:
+content.
+
+Note that the struct variable names are _capitalized_, while the variables in
+the TOML document are _lowercase_.
+
+For example:

 ```go
 doc := `
@@ -119,7 +123,7 @@ tags = ["go", "toml"]

 var cfg MyConfig
 err := toml.Unmarshal([]byte(doc), &cfg)
 if err != nil {
 	panic(err)
 }

 fmt.Println("version:", cfg.Version)
 fmt.Println("name:", cfg.Name)
@@ -133,6 +137,62 @@ fmt.Println("tags:", cfg.Tags)

 [unmarshal]: https://pkg.go.dev/github.com/pelletier/go-toml/v2#Unmarshal

+Here is an example using tables with some simple nesting:
+
+```go
+doc := `
+age = 45
+fruits = ["apple", "pear"]
+
+# these are very important!
+[my-variables]
+first = 1
+second = 0.2
+third = "abc"
+
+# this is not so important.
+[my-variables.b]
+bfirst = 123
+`
+
+var Document struct {
+	Age    int
+	Fruits []string
+
+	Myvariables struct {
+		First  int
+		Second float64
+		Third  string
+
+		B struct {
+			Bfirst int
+		}
+	} `toml:"my-variables"`
+}
+
+err := toml.Unmarshal([]byte(doc), &Document)
+if err != nil {
+	panic(err)
+}
+
+fmt.Println("age:", Document.Age)
+fmt.Println("fruits:", Document.Fruits)
+fmt.Println("my-variables.first:", Document.Myvariables.First)
+fmt.Println("my-variables.second:", Document.Myvariables.Second)
+fmt.Println("my-variables.third:", Document.Myvariables.Third)
+fmt.Println("my-variables.B.Bfirst:", Document.Myvariables.B.Bfirst)
+
+// Output:
+// age: 45
+// fruits: [apple pear]
+// my-variables.first: 1
+// my-variables.second: 0.2
+// my-variables.third: abc
+// my-variables.B.Bfirst: 123
+```
 ### Marshaling

 [`Marshal`][marshal] is the opposite of Unmarshal: it represents a Go structure
@@ -140,14 +200,14 @@ as a TOML document:

 ```go
 cfg := MyConfig{
 	Version: 2,
 	Name:    "go-toml",
 	Tags:    []string{"go", "toml"},
 }

 b, err := toml.Marshal(cfg)
 if err != nil {
 	panic(err)
 }
 fmt.Println(string(b))
@@ -175,17 +235,17 @@ the AST level. See https://pkg.go.dev/github.com/pelletier/go-toml/v2/unstable.

 Execution time speedup compared to other Go TOML libraries:

 <table>
 <thead>
 <tr><th>Benchmark</th><th>go-toml v1</th><th>BurntSushi/toml</th></tr>
 </thead>
 <tbody>
-<tr><td>Marshal/HugoFrontMatter-2</td><td>1.9x</td><td>1.9x</td></tr>
-<tr><td>Marshal/ReferenceFile/map-2</td><td>1.7x</td><td>1.8x</td></tr>
-<tr><td>Marshal/ReferenceFile/struct-2</td><td>2.2x</td><td>2.5x</td></tr>
-<tr><td>Unmarshal/HugoFrontMatter-2</td><td>2.9x</td><td>2.9x</td></tr>
-<tr><td>Unmarshal/ReferenceFile/map-2</td><td>2.6x</td><td>2.9x</td></tr>
-<tr><td>Unmarshal/ReferenceFile/struct-2</td><td>4.4x</td><td>5.3x</td></tr>
+<tr><td>Marshal/HugoFrontMatter-2</td><td>1.9x</td><td>2.2x</td></tr>
+<tr><td>Marshal/ReferenceFile/map-2</td><td>1.7x</td><td>2.1x</td></tr>
+<tr><td>Marshal/ReferenceFile/struct-2</td><td>2.2x</td><td>3.0x</td></tr>
+<tr><td>Unmarshal/HugoFrontMatter-2</td><td>2.9x</td><td>2.7x</td></tr>
+<tr><td>Unmarshal/ReferenceFile/map-2</td><td>2.6x</td><td>2.7x</td></tr>
+<tr><td>Unmarshal/ReferenceFile/struct-2</td><td>4.6x</td><td>5.1x</td></tr>
 </tbody>
 </table>

 <details><summary>See more</summary>
@@ -193,22 +253,22 @@ contains the results of all benchmarks, including unrealistic ones. It is
 provided for completeness.</p>

 <table>
 <thead>
 <tr><th>Benchmark</th><th>go-toml v1</th><th>BurntSushi/toml</th></tr>
 </thead>
 <tbody>
-<tr><td>Marshal/SimpleDocument/map-2</td><td>1.8x</td><td>2.9x</td></tr>
-<tr><td>Marshal/SimpleDocument/struct-2</td><td>2.7x</td><td>4.2x</td></tr>
-<tr><td>Unmarshal/SimpleDocument/map-2</td><td>4.5x</td><td>3.1x</td></tr>
-<tr><td>Unmarshal/SimpleDocument/struct-2</td><td>6.2x</td><td>3.9x</td></tr>
-<tr><td>UnmarshalDataset/example-2</td><td>3.1x</td><td>3.5x</td></tr>
-<tr><td>UnmarshalDataset/code-2</td><td>2.3x</td><td>3.1x</td></tr>
-<tr><td>UnmarshalDataset/twitter-2</td><td>2.5x</td><td>2.6x</td></tr>
-<tr><td>UnmarshalDataset/citm_catalog-2</td><td>2.1x</td><td>2.2x</td></tr>
-<tr><td>UnmarshalDataset/canada-2</td><td>1.6x</td><td>1.3x</td></tr>
-<tr><td>UnmarshalDataset/config-2</td><td>4.3x</td><td>3.2x</td></tr>
-<tr><td>[Geo mean]</td><td>2.7x</td><td>2.8x</td></tr>
+<tr><td>Marshal/SimpleDocument/map-2</td><td>1.8x</td><td>2.7x</td></tr>
+<tr><td>Marshal/SimpleDocument/struct-2</td><td>2.7x</td><td>3.8x</td></tr>
+<tr><td>Unmarshal/SimpleDocument/map-2</td><td>3.8x</td><td>3.0x</td></tr>
+<tr><td>Unmarshal/SimpleDocument/struct-2</td><td>5.6x</td><td>4.1x</td></tr>
+<tr><td>UnmarshalDataset/example-2</td><td>3.0x</td><td>3.2x</td></tr>
+<tr><td>UnmarshalDataset/code-2</td><td>2.3x</td><td>2.9x</td></tr>
+<tr><td>UnmarshalDataset/twitter-2</td><td>2.6x</td><td>2.7x</td></tr>
+<tr><td>UnmarshalDataset/citm_catalog-2</td><td>2.2x</td><td>2.3x</td></tr>
+<tr><td>UnmarshalDataset/canada-2</td><td>1.8x</td><td>1.5x</td></tr>
+<tr><td>UnmarshalDataset/config-2</td><td>4.1x</td><td>2.9x</td></tr>
+<tr><td>geomean</td><td>2.7x</td><td>2.8x</td></tr>
 </tbody>
 </table>

 <p>This table can be generated with <code>./ci.sh benchmark -a -html</code>.</p>
 </details>
@@ -233,24 +293,24 @@ Go-toml provides three handy command line tools:

 * `tomljson`: Reads a TOML file and outputs its JSON representation.

   ```
   $ go install github.com/pelletier/go-toml/v2/cmd/tomljson@latest
   $ tomljson --help
   ```

 * `jsontoml`: Reads a JSON file and outputs a TOML representation.

   ```
   $ go install github.com/pelletier/go-toml/v2/cmd/jsontoml@latest
   $ jsontoml --help
   ```

 * `tomll`: Lints and reformats a TOML file.

   ```
   $ go install github.com/pelletier/go-toml/v2/cmd/tomll@latest
   $ tomll --help
   ```
 ### Docker image

@@ -261,7 +321,7 @@ Those tools are also available as a [Docker image][docker]. For example, to use

     docker run -i ghcr.io/pelletier/go-toml:v2 tomljson < example.toml

-Multiple versions are availble on [ghcr.io][docker].
+Multiple versions are available on [ghcr.io][docker].

 [docker]: https://github.com/pelletier/go-toml/pkgs/container/go-toml
@@ -293,16 +353,16 @@ element in the interface to decode the object. For example:

 ```go
 type inner struct {
 	B interface{}
 }
 type doc struct {
 	A interface{}
 }

 d := doc{
 	A: inner{
 		B: "Before",
 	},
 }

 data := `
@@ -341,7 +401,7 @@ contained in the doc is superior to the capacity of the array. For example:

 ```go
 type doc struct {
 	A [2]string
 }
 d := doc{}
 err := toml.Unmarshal([]byte(`A = ["one", "two", "many"]`), &d)
@@ -565,10 +625,11 @@ complete solutions exist out there.

 ## Versioning

-Go-toml follows [Semantic Versioning](https://semver.org). The supported version
-of [TOML](https://github.com/toml-lang/toml) is indicated at the beginning of
-this document. The last two major versions of Go are supported
-(see [Go Release Policy](https://golang.org/doc/devel/release.html#policy)).
+Except for parts explicitly marked otherwise, go-toml follows [Semantic
+Versioning](https://semver.org). The supported version of
+[TOML](https://github.com/toml-lang/toml) is indicated at the beginning of this
+document. The last two major versions of Go are supported (see [Go Release
+Policy](https://golang.org/doc/devel/release.html#policy)).

 ## License
-3
@@ -2,9 +2,6 @@

 ## Supported Versions

-Use this section to tell people about which versions of your project are
-currently being supported with security updates.
-
 | Version    | Supported          |
 | ---------- | ------------------ |
 | Latest 2.x | :white_check_mark: |
+14 -14
@@ -3,16 +3,16 @@ package benchmark_test

 import (
 	"compress/gzip"
 	"encoding/json"
-	"io/ioutil"
+	"io"
 	"os"
 	"path/filepath"
 	"testing"

 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )

-var bench_inputs = []struct {
+var benchInputs = []struct {
 	name    string
 	jsonLen int
 }{
@@ -30,22 +30,22 @@ var bench_inputs = []struct {
 }

 func TestUnmarshalDatasetCode(t *testing.T) {
-	for _, tc := range bench_inputs {
+	for _, tc := range benchInputs {
 		t.Run(tc.name, func(t *testing.T) {
 			buf := fixture(t, tc.name)
 			var v interface{}
-			require.NoError(t, toml.Unmarshal(buf, &v))
+			assert.NoError(t, toml.Unmarshal(buf, &v))
 			b, err := json.Marshal(v)
-			require.NoError(t, err)
+			assert.NoError(t, err)
-			require.Equal(t, len(b), tc.jsonLen)
+			assert.Equal(t, len(b), tc.jsonLen)
 		})
 	}
 }

 func BenchmarkUnmarshalDataset(b *testing.B) {
-	for _, tc := range bench_inputs {
+	for _, tc := range benchInputs {
 		b.Run(tc.name, func(b *testing.B) {
 			buf := fixture(b, tc.name)
 			b.SetBytes(int64(len(buf)))
@@ -53,7 +53,7 @@ func BenchmarkUnmarshalDataset(b *testing.B) {
 			b.ResetTimer()
 			for i := 0; i < b.N; i++ {
 				var v interface{}
-				require.NoError(b, toml.Unmarshal(buf, &v))
+				assert.NoError(b, toml.Unmarshal(buf, &v))
 			}
 		})
 	}
@@ -68,13 +68,13 @@ func fixture(tb testing.TB, path string) []byte {
 	if os.IsNotExist(err) {
 		tb.Skip("benchmark fixture not found:", file)
 	}
-	require.NoError(tb, err)
+	assert.NoError(tb, err)
-	defer f.Close()
+	defer func() { _ = f.Close() }()
 	gz, err := gzip.NewReader(f)
-	require.NoError(tb, err)
+	assert.NoError(tb, err)
-	buf, err := ioutil.ReadAll(gz)
+	buf, err := io.ReadAll(gz)
-	require.NoError(tb, err)
+	assert.NoError(tb, err)
 	return buf
 }
+24 -24
@@ -2,12 +2,12 @@ package benchmark_test

 import (
 	"bytes"
-	"io/ioutil"
+	"os"
 	"testing"
 	"time"

 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )

 func TestUnmarshalSimple(t *testing.T) {
@@ -18,7 +18,7 @@ func TestUnmarshalSimple(t *testing.T) {
 	err := toml.Unmarshal(doc, &d)
 	if err != nil {
-		panic(err)
+		t.Error(err)
 	}
 }
@@ -38,7 +38,7 @@ func BenchmarkUnmarshal(b *testing.B) {
 			err := toml.Unmarshal(doc, &d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
 	})
@@ -52,14 +52,14 @@ func BenchmarkUnmarshal(b *testing.B) {
 			d := map[string]interface{}{}
 			err := toml.Unmarshal(doc, &d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
 	})
 	})
 	b.Run("ReferenceFile", func(b *testing.B) {
-		bytes, err := ioutil.ReadFile("benchmark.toml")
+		bytes, err := os.ReadFile("benchmark.toml")
 		if err != nil {
 			b.Fatal(err)
 		}
@@ -72,7 +72,7 @@ func BenchmarkUnmarshal(b *testing.B) {
 			d := benchmarkDoc{}
 			err := toml.Unmarshal(bytes, &d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
 	})
@@ -85,7 +85,7 @@ func BenchmarkUnmarshal(b *testing.B) {
 			d := map[string]interface{}{}
 			err := toml.Unmarshal(bytes, &d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
 	})
@@ -99,7 +99,7 @@ func BenchmarkUnmarshal(b *testing.B) {
 			d := map[string]interface{}{}
 			err := toml.Unmarshal(hugoFrontMatterbytes, &d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
 	})
@@ -123,7 +123,7 @@ func BenchmarkMarshal(b *testing.B) {
 		err := toml.Unmarshal(doc, &d)
 		if err != nil {
-			panic(err)
+			b.Error(err)
 		}
 		b.ReportAllocs()
@@ -134,7 +134,7 @@ func BenchmarkMarshal(b *testing.B) {
 		for i := 0; i < b.N; i++ {
 			out, err = marshal(d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
@@ -145,7 +145,7 @@ func BenchmarkMarshal(b *testing.B) {
 		d := map[string]interface{}{}
 		err := toml.Unmarshal(doc, &d)
 		if err != nil {
-			panic(err)
+			b.Error(err)
 		}
 		b.ReportAllocs()
@@ -156,7 +156,7 @@ func BenchmarkMarshal(b *testing.B) {
 		for i := 0; i < b.N; i++ {
 			out, err = marshal(d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
@@ -165,7 +165,7 @@ func BenchmarkMarshal(b *testing.B) {
 	})
 	b.Run("ReferenceFile", func(b *testing.B) {
-		bytes, err := ioutil.ReadFile("benchmark.toml")
+		bytes, err := os.ReadFile("benchmark.toml")
 		if err != nil {
 			b.Fatal(err)
 		}
@@ -174,7 +174,7 @@ func BenchmarkMarshal(b *testing.B) {
 		d := benchmarkDoc{}
 		err := toml.Unmarshal(bytes, &d)
 		if err != nil {
-			panic(err)
+			b.Error(err)
 		}
 		b.ReportAllocs()
 		b.ResetTimer()
@@ -184,7 +184,7 @@ func BenchmarkMarshal(b *testing.B) {
 		for i := 0; i < b.N; i++ {
 			out, err = marshal(d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
@@ -195,7 +195,7 @@ func BenchmarkMarshal(b *testing.B) {
 		d := map[string]interface{}{}
 		err := toml.Unmarshal(bytes, &d)
 		if err != nil {
-			panic(err)
+			b.Error(err)
 		}
 		b.ReportAllocs()
@@ -205,7 +205,7 @@ func BenchmarkMarshal(b *testing.B) {
 		for i := 0; i < b.N; i++ {
 			out, err = marshal(d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
@@ -217,7 +217,7 @@ func BenchmarkMarshal(b *testing.B) {
 		d := map[string]interface{}{}
 		err := toml.Unmarshal(hugoFrontMatterbytes, &d)
 		if err != nil {
-			panic(err)
+			b.Error(err)
 		}
 		b.ReportAllocs()
@@ -228,7 +228,7 @@ func BenchmarkMarshal(b *testing.B) {
 		for i := 0; i < b.N; i++ {
 			out, err = marshal(d)
 			if err != nil {
-				panic(err)
+				b.Error(err)
 			}
 		}
@@ -344,11 +344,11 @@ type benchmarkDoc struct {
 }

 func TestUnmarshalReferenceFile(t *testing.T) {
-	bytes, err := ioutil.ReadFile("benchmark.toml")
+	bytes, err := os.ReadFile("benchmark.toml")
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	d := benchmarkDoc{}
 	err = toml.Unmarshal(bytes, &d)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	expected := benchmarkDoc{
 		Table: struct {
@@ -627,7 +627,7 @@ trimmed in raw strings.
 		},
 	}
-	require.Equal(t, expected, d)
+	assert.Equal(t, expected, d)
 }

 var hugoFrontMatterbytes = []byte(`
+13 -9
@@ -77,7 +77,7 @@ cover() {
     pushd "$dir"
     go test -covermode=atomic -coverpkg=./... -coverprofile=coverage.out.tmp ./...
-    cat coverage.out.tmp | grep -v fuzz | grep -v testsuite | grep -v tomltestgen | grep -v gotoml-test-decoder > coverage.out
+    grep -Ev '(fuzz|testsuite|tomltestgen|gotoml-test-decoder|gotoml-test-encoder)' coverage.out.tmp > coverage.out
    go tool cover -func=coverage.out
    echo "Coverage profile for ${branch}: ${dir}/coverage.out" >&2
    popd
@@ -152,7 +152,7 @@ bench() {
    fi
    export GOMAXPROCS=2
-    nice -n -19 taskset --cpu-list 0,1 go test '-bench=^Benchmark(Un)?[mM]arshal' -count=5 -run=Nothing ./... | tee "${out}"
+    go test '-bench=^Benchmark(Un)?[mM]arshal' -count=10 -run=Nothing ./... | tee "${out}"
    popd

    if [ "${branch}" != "HEAD" ]; then
@@ -161,10 +161,12 @@ bench() {
 }

 fmktemp() {
-    if mktemp --version|grep GNU >/dev/null; then
+    if mktemp --version &> /dev/null; then
+        # GNU
-        mktemp --suffix=-$1;
+        mktemp --suffix=-$1
     else
+        # BSD
-        mktemp -t $1;
+        mktemp -t $1
     fi
 }
@@ -184,12 +186,14 @@ with open(sys.argv[1]) as f:
     lines.append(line.split(','))

 results = []
-for line in reversed(lines[1:]):
+for line in reversed(lines[2:]):
+    if len(line) < 8 or line[0] == "":
+        continue
     v2 = float(line[1])
     results.append([
         line[0].replace("-32", ""),
         "%.1fx" % (float(line[3])/v2), # v1
-        "%.1fx" % (float(line[5])/v2), # bs
+        "%.1fx" % (float(line[7])/v2), # bs
     ])
 # move geomean to the end
 results.append(results[0])
@@ -260,10 +264,10 @@ benchmark() {
    if [ "$1" = "-html" ]; then
        tmpcsv=`fmktemp csv`
-        benchstat -csv -geomean go-toml-v2.txt go-toml-v1.txt bs-toml.txt > $tmpcsv
+        benchstat -format csv go-toml-v2.txt go-toml-v1.txt bs-toml.txt > $tmpcsv
        benchstathtml $tmpcsv
    else
-        benchstat -geomean go-toml-v2.txt go-toml-v1.txt bs-toml.txt
+        benchstat go-toml-v2.txt go-toml-v1.txt bs-toml.txt
    fi

    rm -f go-toml-v2.txt go-toml-v1.txt bs-toml.txt
+1
@@ -1,3 +1,4 @@
+// Package gotoml-test-decoder is a minimal decoder program used to compare this library with other TOML implementations.
 package main

 import (
+31
@@ -0,0 +1,31 @@
+// Package gotoml-test-encoder is a minimal encoder program used to compare this library with other TOML implementations.
+package main
+
+import (
+	"flag"
+	"log"
+	"os"
+	"path"
+
+	"github.com/pelletier/go-toml/v2/internal/testsuite"
+)
+
+func main() {
+	log.SetFlags(0)
+	flag.Usage = usage
+	flag.Parse()
+	if flag.NArg() != 0 {
+		flag.Usage()
+	}
+
+	err := testsuite.EncodeStdin()
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+
+func usage() {
+	log.Printf("Usage: %s < json-file\n", path.Base(os.Args[0]))
+	flag.PrintDefaults()
+	os.Exit(1)
+}
+12 -1
@@ -19,6 +19,7 @@ package main
 import (
 	"encoding/json"
+	"flag"
 	"io"
 	"github.com/pelletier/go-toml/v2"
@@ -33,7 +34,11 @@ Reading from a file:
   jsontoml file.json > file.toml
 `
+var useJSONNumber bool
 func main() {
+	flag.BoolVar(&useJSONNumber, "use-json-number", false, "unmarshal numbers into `json.Number` type instead of as `float64`")
 	p := cli.Program{
 		Usage: usage,
 		Fn:    convert,
@@ -45,11 +50,17 @@ func convert(r io.Reader, w io.Writer) error {
 	var v interface{}
 	d := json.NewDecoder(r)
+	e := toml.NewEncoder(w)
+	if useJSONNumber {
+		d.UseNumber()
+		e.SetMarshalJSONNumbers(true)
+	}
 	err := d.Decode(&v)
 	if err != nil {
 		return err
 	}
-	e := toml.NewEncoder(w)
 	return e.Encode(v)
 }
+21 -7
@@ -5,16 +5,16 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/pelletier/go-toml/v2/internal/assert"
"github.com/stretchr/testify/require"
) )
func TestConvert(t *testing.T) { func TestConvert(t *testing.T) {
examples := []struct { examples := []struct {
name string name string
input string input string
expected string expected string
errors bool errors bool
useJSONNumber bool
}{ }{
{ {
name: "valid json", name: "valid json",
@@ -26,6 +26,19 @@ func TestConvert(t *testing.T) {
}`, }`,
expected: `[mytoml] expected: `[mytoml]
a = 42.0 a = 42.0
`,
},
{
name: "use json number",
useJSONNumber: true,
input: `
{
"mytoml": {
"a": 42
}
}`,
expected: `[mytoml]
a = 42
`, `,
}, },
{ {
@@ -37,9 +50,10 @@ a = 42.0
for _, e := range examples { for _, e := range examples {
b := new(bytes.Buffer) b := new(bytes.Buffer)
useJSONNumber = e.useJSONNumber
err := convert(strings.NewReader(e.input), b) err := convert(strings.NewReader(e.input), b)
if e.errors { if e.errors {
require.Error(t, err) assert.Error(t, err)
} else { } else {
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, e.expected, b.String()) assert.Equal(t, e.expected, b.String())
+4 -5
@@ -2,13 +2,12 @@ package main
 import (
 	"bytes"
-	"fmt"
+	"errors"
 	"io"
 	"strings"
 	"testing"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 func TestConvert(t *testing.T) {
@@ -46,7 +45,7 @@ a = 42`),
 		b := new(bytes.Buffer)
 		err := convert(e.input, b)
 		if e.errors {
-			require.Error(t, err)
+			assert.Error(t, err)
 		} else {
 			assert.NoError(t, err)
 			assert.Equal(t, e.expected, b.String())
@@ -57,5 +56,5 @@ a = 42`),
 type badReader struct{}
 func (r *badReader) Read([]byte) (int, error) {
-	return 0, fmt.Errorf("reader failed on purpose")
+	return 0, errors.New("reader failed on purpose")
 }
+2 -3
@@ -5,8 +5,7 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/pelletier/go-toml/v2/internal/assert"
"github.com/stretchr/testify/require"
) )
func TestConvert(t *testing.T) { func TestConvert(t *testing.T) {
@@ -36,7 +35,7 @@ a = 42.0
b := new(bytes.Buffer) b := new(bytes.Buffer)
err := convert(strings.NewReader(e.input), b) err := convert(strings.NewReader(e.input), b)
if e.errors { if e.errors {
require.Error(t, err) assert.Error(t, err)
} else { } else {
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, e.expected, b.String()) assert.Equal(t, e.expected, b.String())
+61 -101
@@ -7,21 +7,18 @@
 package main
 import (
-	"archive/zip"
 	"bytes"
 	"flag"
 	"fmt"
 	"go/format"
-	"io"
-	"io/ioutil"
 	"log"
-	"net/http"
 	"os"
-	"regexp"
+	"path/filepath"
 	"strconv"
 	"strings"
 	"text/template"
 	"time"
+	"unicode"
 )
 type invalid struct {
@@ -32,7 +29,7 @@ type invalid struct {
 type valid struct {
 	Name    string
 	Input   string
-	JsonRef string
+	JSONRef string
 }
 type testsCollection struct {
@@ -43,12 +40,11 @@ type testsCollection struct {
 	Count int
 }
-const srcTemplate = "// Generated by tomltestgen for toml-test ref {{.Ref}} on {{.Timestamp}}\n" +
+const srcTemplate = "// Code generated by tomltestgen for toml-test ref {{.Ref}} on {{.Timestamp}}. DO NOT EDIT.\n" +
 	"package toml_test\n" +
 	" import (\n" +
 	" \"testing\"\n" +
 	")\n" +
 	"{{range .Invalid}}\n" +
 	"func TestTOMLTest_Invalid_{{.Name}}(t *testing.T) {\n" +
 	" input := {{.Input|gostr}}\n" +
@@ -59,65 +55,31 @@ const srcTemplate = "// Generated by tomltestgen for toml-test ref {{.Ref}} on {
 	"{{range .Valid}}\n" +
 	"func TestTOMLTest_Valid_{{.Name}}(t *testing.T) {\n" +
 	" input := {{.Input|gostr}}\n" +
-	" jsonRef := {{.JsonRef|gostr}}\n" +
+	" jsonRef := {{.JSONRef|gostr}}\n" +
 	" testgenValid(t, input, jsonRef)\n" +
 	"}\n" +
 	"{{end}}\n"
-func downloadTmpFile(url string) string {
-	log.Println("starting to download file from", url)
-	resp, err := http.Get(url)
-	if err != nil {
-		panic(err)
-	}
-	defer resp.Body.Close()
-	tmpfile, err := ioutil.TempFile("", "toml-test-*.zip")
-	if err != nil {
-		panic(err)
-	}
-	defer tmpfile.Close()
-	copiedLen, err := io.Copy(tmpfile, resp.Body)
-	if err != nil {
-		panic(err)
-	}
-	if resp.ContentLength > 0 && copiedLen != resp.ContentLength {
-		panic(fmt.Errorf("copied %d bytes, request body had %d", copiedLen, resp.ContentLength))
-	}
-	return tmpfile.Name()
-}
 func kebabToCamel(kebab string) string {
-	camel := ""
+	var buf strings.Builder
 	nextUpper := true
 	for _, c := range kebab {
 		if nextUpper {
-			camel += strings.ToUpper(string(c))
+			buf.WriteRune(unicode.ToUpper(c))
 			nextUpper = false
-		} else if c == '-' {
-			nextUpper = true
-		} else if c == '/' {
-			nextUpper = true
-			camel += "_"
 		} else {
-			camel += string(c)
+			switch c {
+			case '-':
+				nextUpper = true
+			case '/':
+				nextUpper = true
+				buf.WriteByte('_')
+			default:
+				buf.WriteRune(c)
+			}
 		}
 	}
-	return camel
+	return buf.String()
 }
-func readFileFromZip(f *zip.File) string {
-	reader, err := f.Open()
-	if err != nil {
-		panic(err)
-	}
-	defer reader.Close()
-	bytes, err := ioutil.ReadAll(reader)
-	if err != nil {
-		panic(err)
-	}
-	return string(bytes)
-}
 func templateGoStr(input string) string {
@@ -138,61 +100,59 @@ func main() {
 	flag.Usage = usage
 	flag.Parse()
-	url := "https://codeload.github.com/BurntSushi/toml-test/zip/" + *ref
-	resultFile := downloadTmpFile(url)
-	defer os.Remove(resultFile)
-	log.Println("file written to", resultFile)
-	zipReader, err := zip.OpenReader(resultFile)
-	if err != nil {
-		panic(err)
-	}
-	defer zipReader.Close()
 	collection := testsCollection{
 		Ref:       *ref,
 		Timestamp: time.Now().Format(time.RFC3339),
 	}
-	zipFilesMap := map[string]*zip.File{}
-	for _, f := range zipReader.File {
-		zipFilesMap[f.Name] = f
+	dirContent, _ := filepath.Glob("tests/invalid/**/*.toml")
+	for _, f := range dirContent {
+		filename := strings.TrimPrefix(f, "tests/valid/")
+		name := kebabToCamel(strings.TrimSuffix(filename, ".toml"))
+		name = strings.ReplaceAll(name, ".", "_")
+		log.Printf("> [%s] %s\n", "invalid", name)
+		tomlContent, err := os.ReadFile(f) // #nosec G304
+		if err != nil {
+			fmt.Printf("failed to read test file: %s\n", err)
+			os.Exit(1)
+		}
+		collection.Invalid = append(collection.Invalid, invalid{
+			Name:  name,
+			Input: string(tomlContent),
+		})
+		collection.Count++
 	}
-	testFileRegexp := regexp.MustCompile(`([^/]+/tests/(valid|invalid)/(.+))\.(toml)`)
-	for _, f := range zipReader.File {
-		groups := testFileRegexp.FindStringSubmatch(f.Name)
-		if len(groups) > 0 {
-			name := kebabToCamel(groups[3])
-			testType := groups[2]
-			log.Printf("> [%s] %s\n", testType, name)
-			tomlContent := readFileFromZip(f)
-			switch testType {
-			case "invalid":
-				collection.Invalid = append(collection.Invalid, invalid{
-					Name:  name,
-					Input: tomlContent,
-				})
-				collection.Count++
-			case "valid":
-				baseFilePath := groups[1]
-				jsonFilePath := baseFilePath + ".json"
-				jsonContent := readFileFromZip(zipFilesMap[jsonFilePath])
-				collection.Valid = append(collection.Valid, valid{
-					Name:    name,
-					Input:   tomlContent,
-					JsonRef: jsonContent,
-				})
-				collection.Count++
-			default:
-				panic(fmt.Sprintf("unknown test type: %s", testType))
-			}
-		}
+	dirContent, _ = filepath.Glob("tests/valid/**/*.toml")
+	for _, f := range dirContent {
+		filename := strings.TrimPrefix(f, "tests/valid/")
+		name := kebabToCamel(strings.TrimSuffix(filename, ".toml"))
+		name = strings.ReplaceAll(name, ".", "_")
+		log.Printf("> [%s] %s\n", "valid", name)
+		tomlContent, err := os.ReadFile(f) // #nosec G304
+		if err != nil {
+			fmt.Printf("failed reading test file: %s\n", err)
+			os.Exit(1)
+		}
+		filename = strings.TrimSuffix(f, ".toml")
+		jsonContent, err := os.ReadFile(filename + ".json") // #nosec G304
+		if err != nil {
+			fmt.Printf("failed reading validation json: %s\n", err)
+			os.Exit(1)
+		}
+		collection.Valid = append(collection.Valid, valid{
+			Name:    name,
+			Input:   string(tomlContent),
+			JSONRef: string(jsonContent),
+		})
+		collection.Count++
 	}
 	log.Printf("Collected %d tests from toml-test\n", collection.Count)
@@ -202,7 +162,7 @@ func main() {
 	}
 	t := template.Must(template.New("src").Funcs(funcMap).Parse(srcTemplate))
 	buf := new(bytes.Buffer)
-	err = t.Execute(buf, collection)
+	err := t.Execute(buf, collection)
 	if err != nil {
 		panic(err)
 	}
@@ -216,7 +176,7 @@ func main() {
 		return
 	}
-	err = os.WriteFile(*out, outputBytes, 0644)
+	err = os.WriteFile(*out, outputBytes, 0o600)
 	if err != nil {
 		panic(err)
 	}
+2 -3
@@ -230,8 +230,8 @@ func parseLocalTime(b []byte) (LocalTime, []byte, error) {
 		return t, nil, err
 	}
-	if t.Second > 60 {
-		return t, nil, unstable.NewParserError(b[6:8], "seconds cannot be greater 60")
+	if t.Second > 59 {
+		return t, nil, unstable.NewParserError(b[6:8], "seconds cannot be greater than 59")
 	}
 	b = b[8:]
@@ -279,7 +279,6 @@ func parseLocalTime(b []byte) (LocalTime, []byte, error) {
 	return t, b, nil
 }
-//nolint:cyclop
 func parseFloat(b []byte) (float64, error) {
 	if len(b) == 4 && (b[0] == '+' || b[0] == '-') && b[1] == 'n' && b[2] == 'a' && b[3] == 'n' {
 		return math.NaN(), nil
+35 -4
@@ -2,10 +2,10 @@ package toml
 import (
 	"fmt"
+	"reflect"
 	"strconv"
 	"strings"
-	"github.com/pelletier/go-toml/v2/internal/danger"
 	"github.com/pelletier/go-toml/v2/unstable"
 )
@@ -54,6 +54,18 @@ func (s *StrictMissingError) String() string {
 	return buf.String()
 }
+// Unwrap returns the wrapped decode errors.
+//
+// It implements the multi-error convention used by errors.Join.
+func (s *StrictMissingError) Unwrap() []error {
+	errs := make([]error, len(s.Errors))
+	for i := range s.Errors {
+		errs[i] = &s.Errors[i]
+	}
+	return errs
+}
+// Key represents a TOML key as a sequence of key parts.
 type Key []string
 // Error returns the error message contained in the DecodeError.
@@ -78,7 +90,7 @@ func (e *DecodeError) Key() Key {
 	return e.key
 }
-// decodeErrorFromHighlight creates a DecodeError referencing a highlighted
+// wrapDecodeError creates a DecodeError referencing a highlighted
 // range of bytes from document.
 //
 // highlight needs to be a sub-slice of document, or this function panics.
@@ -88,7 +100,7 @@ func (e *DecodeError) Key() Key {
 //
 //nolint:funlen
 func wrapDecodeError(document []byte, de *unstable.ParserError) *DecodeError {
-	offset := danger.SubsliceOffset(document, de.Highlight)
+	offset := subsliceOffset(document, de.Highlight)
 	errMessage := de.Error()
 	errLine, errColumn := positionAtEnd(document[:offset])
@@ -248,5 +260,24 @@ func positionAtEnd(b []byte) (row int, column int) {
 		}
 	}
-	return
+	return row, column
+}
+
+// subsliceOffset returns the byte offset of subslice within data.
+// subslice must share the same backing array as data.
+func subsliceOffset(data []byte, subslice []byte) int {
+	if len(subslice) == 0 {
+		return 0
+	}
+	// Use reflect to get the data pointers of both slices.
+	// This is safe because we're only reading the pointer values for comparison.
+	dataPtr := reflect.ValueOf(data).Pointer()
+	subPtr := reflect.ValueOf(subslice).Pointer()
+	offset := int(subPtr - dataPtr)
+	if offset < 0 || offset > len(data) {
+		panic("subslice is not within data")
+	}
+	return offset
 }
+97 -7
@@ -7,13 +7,12 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/pelletier/go-toml/v2/internal/assert"
"github.com/pelletier/go-toml/v2/unstable" "github.com/pelletier/go-toml/v2/unstable"
"github.com/stretchr/testify/assert"
) )
//nolint:funlen //nolint:funlen
func TestDecodeError(t *testing.T) { func TestDecodeError(t *testing.T) {
examples := []struct { examples := []struct {
desc string desc string
doc [3]string doc [3]string
@@ -161,13 +160,12 @@ line 5`,
for _, e := range examples { for _, e := range examples {
e := e e := e
t.Run(e.desc, func(t *testing.T) { t.Run(e.desc, func(t *testing.T) {
b := bytes.Buffer{} b := bytes.Buffer{}
b.Write([]byte(e.doc[0])) b.WriteString(e.doc[0])
start := b.Len() start := b.Len()
b.Write([]byte(e.doc[1])) b.WriteString(e.doc[1])
end := b.Len() end := b.Len()
b.Write([]byte(e.doc[2])) b.WriteString(e.doc[2])
doc := b.Bytes() doc := b.Bytes()
hl := doc[start:end] hl := doc[start:end]
@@ -189,7 +187,6 @@ line 5`,
} }
func TestDecodeError_Accessors(t *testing.T) { func TestDecodeError_Accessors(t *testing.T) {
e := DecodeError{ e := DecodeError{
message: "foo", message: "foo",
line: 1, line: 1,
@@ -205,6 +202,99 @@ func TestDecodeError_Accessors(t *testing.T) {
assert.Equal(t, "bar", e.String()) assert.Equal(t, "bar", e.String())
} }
func TestDecodeError_DuplicateContent(t *testing.T) {
// This test verifies that when the same content appears multiple times
// in the document, the error correctly points to the actual location
// of the error, not the first occurrence of the content.
//
// The document has "1__2" on line 1 and "3__4" on line 2.
// Both have "__" which is invalid, but we want to ensure errors
// on line 2 report line 2, not line 1.
doc := `a = 1
b = 3__4`
var v map[string]int
err := Unmarshal([]byte(doc), &v)
var derr *DecodeError
if !errors.As(err, &derr) {
t.Fatal("error not in expected format")
}
row, col := derr.Position()
// The error should be on line 2 where "3__4" is
if row != 2 {
t.Errorf("expected error on row 2, got row %d", row)
}
// Column should point to the "__" part (after "3")
if col < 5 {
t.Errorf("expected error at column >= 5, got column %d", col)
}
}
func TestDecodeError_Position(t *testing.T) {
// Test that error positions are correctly reported for various error locations
examples := []struct {
name string
doc string
expectedRow int
minCol int
}{
{
name: "error on first line",
doc: `a = 1__2`,
expectedRow: 1,
minCol: 5,
},
{
name: "error on second line",
doc: "a = 1\nb = 2__3",
expectedRow: 2,
minCol: 5,
},
{
name: "error on third line",
doc: "a = 1\nb = 2\nc = 3__4",
expectedRow: 3,
minCol: 5,
},
}
for _, e := range examples {
t.Run(e.name, func(t *testing.T) {
var v map[string]int
err := Unmarshal([]byte(e.doc), &v)
var derr *DecodeError
if !errors.As(err, &derr) {
t.Fatal("error not in expected format")
}
row, col := derr.Position()
assert.Equal(t, e.expectedRow, row)
if col < e.minCol {
t.Errorf("expected column >= %d, got %d", e.minCol, col)
}
})
}
}
func TestStrictErrorUnwrap(t *testing.T) {
fo := bytes.NewBufferString(`
Missing = 1
OtherMissing = 1
`)
var out struct{}
err := NewDecoder(fo).DisallowUnknownFields().Decode(&out)
assert.Error(t, err)
strictErr := &StrictMissingError{}
assert.True(t, errors.As(err, &strictErr))
assert.Equal(t, 2, len(strictErr.Unwrap()))
}
 func ExampleDecodeError() {
 	doc := `name = 123__456`
+15 -15
@@ -4,28 +4,28 @@ import (
"testing" "testing"
"github.com/pelletier/go-toml/v2" "github.com/pelletier/go-toml/v2"
"github.com/stretchr/testify/require" "github.com/pelletier/go-toml/v2/internal/assert"
) )
func TestFastSimpleInt(t *testing.T) { func TestFastSimpleInt(t *testing.T) {
m := map[string]int64{} m := map[string]int64{}
err := toml.Unmarshal([]byte(`a = 42`), &m) err := toml.Unmarshal([]byte(`a = 42`), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]int64{"a": 42}, m) assert.Equal(t, map[string]int64{"a": 42}, m)
} }
func TestFastSimpleFloat(t *testing.T) { func TestFastSimpleFloat(t *testing.T) {
m := map[string]float64{} m := map[string]float64{}
err := toml.Unmarshal([]byte("a = 42\nb = 1.1\nc = 12341234123412341234123412341234"), &m) err := toml.Unmarshal([]byte("a = 42\nb = 1.1\nc = 12341234123412341234123412341234"), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]float64{"a": 42, "b": 1.1, "c": 1.2341234123412342e+31}, m) assert.Equal(t, map[string]float64{"a": 42, "b": 1.1, "c": 1.2341234123412342e+31}, m)
} }
func TestFastSimpleString(t *testing.T) { func TestFastSimpleString(t *testing.T) {
m := map[string]string{} m := map[string]string{}
err := toml.Unmarshal([]byte(`a = "hello"`), &m) err := toml.Unmarshal([]byte(`a = "hello"`), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]string{"a": "hello"}, m) assert.Equal(t, map[string]string{"a": "hello"}, m)
} }
func TestFastSimpleInterface(t *testing.T) { func TestFastSimpleInterface(t *testing.T) {
@@ -33,8 +33,8 @@ func TestFastSimpleInterface(t *testing.T) {
err := toml.Unmarshal([]byte(` err := toml.Unmarshal([]byte(`
a = "hello" a = "hello"
b = 42`), &m) b = 42`), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]interface{}{ assert.Equal(t, map[string]interface{}{
"a": "hello", "a": "hello",
"b": int64(42), "b": int64(42),
}, m) }, m)
@@ -46,8 +46,8 @@ func TestFastMultipartKeyInterface(t *testing.T) {
a.interim = "test" a.interim = "test"
a.b.c = "hello" a.b.c = "hello"
b = 42`), &m) b = 42`), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]interface{}{ assert.Equal(t, map[string]interface{}{
"a": map[string]interface{}{ "a": map[string]interface{}{
"interim": "test", "interim": "test",
"b": map[string]interface{}{ "b": map[string]interface{}{
@@ -66,8 +66,8 @@ func TestFastExistingMap(t *testing.T) {
ints.one = 1 ints.one = 1
ints.two = 2 ints.two = 2
strings.yo = "hello"`), &m) strings.yo = "hello"`), &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]interface{}{ assert.Equal(t, map[string]interface{}{
"ints": map[string]interface{}{ "ints": map[string]interface{}{
"one": int64(1), "one": int64(1),
"two": int64(2), "two": int64(2),
@@ -90,9 +90,9 @@ func TestFastArrayTable(t *testing.T) {
m := map[string]interface{}{} m := map[string]interface{}{}
err := toml.Unmarshal(b, &m) err := toml.Unmarshal(b, &m)
require.NoError(t, err) assert.NoError(t, err)
require.Equal(t, map[string]interface{}{ assert.Equal(t, map[string]interface{}{
"root": map[string]interface{}{ "root": map[string]interface{}{
"nested": []interface{}{ "nested": []interface{}{
map[string]interface{}{ map[string]interface{}{
+5 -8
@@ -1,21 +1,18 @@
-//go:build go1.18 || go1.19 || go1.20 || go1.21
-// +build go1.18 go1.19 go1.20 go1.21
 package toml_test
 import (
-	"io/ioutil"
+	"os"
 	"strings"
 	"testing"
 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 func FuzzUnmarshal(f *testing.F) {
-	file, err := ioutil.ReadFile("benchmark/benchmark.toml")
+	file, err := os.ReadFile("benchmark/benchmark.toml")
 	if err != nil {
-		panic(err)
+		f.Error(err)
 	}
 	f.Add(file)
@@ -51,6 +48,6 @@ func FuzzUnmarshal(f *testing.F) {
 		if err != nil {
 			t.Fatalf("failed round trip: %s", err)
 		}
-		require.Equal(t, v, v2)
+		assert.Equal(t, v, v2)
 	})
 }
+1 -3
@@ -1,5 +1,3 @@
 module github.com/pelletier/go-toml/v2
-go 1.16
+go 1.21.0
-require github.com/stretchr/testify v1.8.4
-17
@@ -1,17 +0,0 @@
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+141
@@ -0,0 +1,141 @@
// Package assert provides assertion functions for unit testing.
package assert
import (
"bytes"
"fmt"
"reflect"
"strings"
"testing"
)
// True asserts that an expression is true.
func True(tb testing.TB, ok bool, msgAndArgs ...any) {
tb.Helper()
if ok {
return
}
tb.Fatal(formatMsgAndArgs("Expected expression to be true", msgAndArgs...))
}
// False asserts that an expression is false.
func False(tb testing.TB, ok bool, msgAndArgs ...any) {
tb.Helper()
if !ok {
return
}
tb.Fatal(formatMsgAndArgs("Expected expression to be false", msgAndArgs...))
}
// Equal asserts that "expected" and "actual" are equal.
func Equal[T any](tb testing.TB, expected, actual T, msgAndArgs ...any) {
tb.Helper()
if objectsAreEqual(expected, actual) {
return
}
msg := formatMsgAndArgs("Expected values to be equal:", msgAndArgs...)
tb.Fatalf("%s\n%s", msg, diff(expected, actual))
}
// Error asserts that an error is not nil.
func Error(tb testing.TB, err error, msgAndArgs ...any) {
tb.Helper()
if err != nil {
return
}
tb.Fatal(formatMsgAndArgs("Expected an error", msgAndArgs...))
}
// NoError asserts that an error is nil.
func NoError(tb testing.TB, err error, msgAndArgs ...any) {
tb.Helper()
if err == nil {
return
}
msg := formatMsgAndArgs("Unexpected error:", msgAndArgs...)
tb.Fatalf("%s\n%+v", msg, err)
}
// Panics asserts that the given function panics.
func Panics(tb testing.TB, fn func(), msgAndArgs ...any) {
tb.Helper()
defer func() {
if recover() == nil {
msg := formatMsgAndArgs("Expected function to panic", msgAndArgs...)
tb.Fatal(msg)
}
}()
fn()
}
// Zero asserts that a value is its zero value.
func Zero[T any](tb testing.TB, value T, msgAndArgs ...any) {
tb.Helper()
var zero T
if objectsAreEqual(value, zero) {
return
}
val := reflect.ValueOf(value)
if (val.Kind() == reflect.Slice || val.Kind() == reflect.Map || val.Kind() == reflect.Array) && val.Len() == 0 {
return
}
msg := formatMsgAndArgs("Expected zero value but got:", msgAndArgs...)
tb.Fatalf("%s\n%v", msg, value)
}
func NotZero[T any](tb testing.TB, value T, msgAndArgs ...any) {
tb.Helper()
var zero T
if !objectsAreEqual(value, zero) {
val := reflect.ValueOf(value)
switch val.Kind() {
case reflect.Slice, reflect.Map, reflect.Array:
if val.Len() > 0 {
return
}
default:
return
}
}
msg := formatMsgAndArgs("Unexpected zero value:", msgAndArgs...)
tb.Fatalf("%s\n%v", msg, value)
}
func formatMsgAndArgs(msg string, args ...any) string {
if len(args) == 0 {
return msg
}
format, ok := args[0].(string)
if !ok {
panic("message argument must be a fmt string")
}
return fmt.Sprintf(format, args[1:]...)
}
func diff(expected, actual any) string {
lines := []string{
"expected:",
fmt.Sprintf("%v", expected),
"actual:",
fmt.Sprintf("%v", actual),
}
return strings.Join(lines, "\n")
}
func objectsAreEqual(expected, actual any) bool {
if expected == nil || actual == nil {
return expected == actual
}
if exp, eok := expected.([]byte); eok {
if act, aok := actual.([]byte); aok {
return bytes.Equal(exp, act)
}
}
if exp, eok := expected.(string); eok {
if act, aok := actual.(string); aok {
return exp == act
}
}
return reflect.DeepEqual(expected, actual)
}
+217
@@ -0,0 +1,217 @@
package assert
import (
"errors"
"fmt"
"testing"
)
type Data struct {
Label string
Value int64
}
func TestBadMessage(t *testing.T) {
invalidMessage := func() { True(t, false, 1234) }
assertOk(t, "Non-fmt message value", func(tb testing.TB) {
tb.Helper()
Panics(tb, invalidMessage)
})
assertFail(t, "Non-fmt message value", func(tb testing.TB) {
tb.Helper()
True(tb, false, "example %s", "message")
})
}
func TestTrue(t *testing.T) {
assertOk(t, "Succeed", func(tb testing.TB) {
tb.Helper()
True(tb, 1 > 0)
})
assertFail(t, "Fail", func(tb testing.TB) {
tb.Helper()
True(tb, 1 < 0)
})
}
func TestFalse(t *testing.T) {
assertOk(t, "Succeed", func(tb testing.TB) {
tb.Helper()
False(tb, 1 < 0)
})
assertFail(t, "Fail", func(tb testing.TB) {
tb.Helper()
False(tb, 1 > 0)
})
}
func TestEqual(t *testing.T) {
assertOk(t, "Nil", func(tb testing.TB) {
tb.Helper()
Equal(tb, interface{}(nil), interface{}(nil))
})
assertOk(t, "Identical structs", func(tb testing.TB) {
tb.Helper()
Equal(tb, Data{"expected", 1234}, Data{"expected", 1234})
})
assertFail(t, "Different structs", func(tb testing.TB) {
tb.Helper()
Equal(tb, Data{"expected", 1234}, Data{"actual", 1234})
})
assertOk(t, "Identical numbers", func(tb testing.TB) {
tb.Helper()
Equal(tb, 1234, 1234)
})
assertFail(t, "Identical numbers", func(tb testing.TB) {
tb.Helper()
Equal(tb, 1234, 1324)
})
assertOk(t, "Zero-length byte arrays", func(tb testing.TB) {
tb.Helper()
Equal(tb, []byte(nil), []byte(""))
})
assertOk(t, "Identical byte arrays", func(tb testing.TB) {
tb.Helper()
Equal(tb, []byte{1, 2, 3, 4}, []byte{1, 2, 3, 4})
})
assertFail(t, "Different byte arrays", func(tb testing.TB) {
tb.Helper()
Equal(tb, []byte{1, 2, 3, 4}, []byte{1, 3, 2, 4})
})
assertOk(t, "Identical strings", func(tb testing.TB) {
tb.Helper()
Equal(tb, "example", "example")
})
assertFail(t, "Identical strings", func(tb testing.TB) {
tb.Helper()
Equal(tb, "example", "elpmaxe")
})
}
func TestError(t *testing.T) {
assertOk(t, "Error", func(tb testing.TB) {
tb.Helper()
Error(tb, errors.New("example"))
})
assertFail(t, "Nil", func(tb testing.TB) {
tb.Helper()
Error(tb, nil)
})
}
func TestNoError(t *testing.T) {
assertFail(t, "Error", func(tb testing.TB) {
tb.Helper()
NoError(tb, errors.New("example"))
})
assertOk(t, "Nil", func(tb testing.TB) {
tb.Helper()
NoError(tb, nil)
})
}
func TestPanics(t *testing.T) {
willPanic := func() { panic("example") }
wontPanic := func() {}
assertOk(t, "Will panic", func(tb testing.TB) {
tb.Helper()
Panics(tb, willPanic)
})
assertFail(t, "Won't panic", func(tb testing.TB) {
tb.Helper()
Panics(tb, wontPanic)
})
}
func TestZero(t *testing.T) {
assertOk(t, "Empty struct", func(tb testing.TB) {
tb.Helper()
Zero(tb, Data{})
})
assertFail(t, "Non-empty struct", func(tb testing.TB) {
tb.Helper()
Zero(tb, Data{Label: "example"})
})
assertOk(t, "Nil slice", func(tb testing.TB) {
tb.Helper()
var slice []int
Zero(tb, slice)
})
assertFail(t, "Non-empty slice", func(tb testing.TB) {
tb.Helper()
slice := []int{1, 2, 3, 4}
Zero(tb, slice)
})
assertOk(t, "Zero-length slice", func(tb testing.TB) {
tb.Helper()
slice := []int{}
Zero(tb, slice)
})
}
func TestNotZero(t *testing.T) {
assertFail(t, "Empty struct", func(tb testing.TB) {
tb.Helper()
zero := Data{}
NotZero(tb, zero)
})
assertOk(t, "Non-empty struct", func(tb testing.TB) {
tb.Helper()
notZero := Data{Label: "example"}
NotZero(tb, notZero)
})
assertFail(t, "Nil slice", func(tb testing.TB) {
tb.Helper()
var slice []int
NotZero(tb, slice)
})
assertFail(t, "Zero-length slice", func(tb testing.TB) {
tb.Helper()
slice := []int{}
NotZero(tb, slice)
})
assertOk(t, "Non-empty slice", func(tb testing.TB) {
tb.Helper()
slice := []int{1, 2, 3, 4}
NotZero(tb, slice)
})
}
type testCase struct {
*testing.T
failed string
}
func (t *testCase) Fatal(args ...interface{}) {
t.failed = fmt.Sprint(args...)
}
func (t *testCase) Fatalf(message string, args ...interface{}) {
t.failed = fmt.Sprintf(message, args...)
}
func assertFail(t *testing.T, name string, fn func(testing.TB)) {
t.Helper()
t.Run(name, func(t *testing.T) {
t.Helper()
test := &testCase{T: t}
fn(test)
if test.failed == "" {
t.Fatal("Test expected to fail but did not")
} else {
t.Log(test.failed)
}
})
}
func assertOk(t *testing.T, name string, fn func(testing.TB)) {
t.Helper()
t.Run(name, func(t *testing.T) {
t.Helper()
test := &testCase{T: t}
fn(test)
if test.failed != "" {
t.Fatal("Test expected to succeed but did not:\n", test.failed)
}
})
}
+3 -3
@@ -1,6 +1,6 @@
 package characters
-var invalidAsciiTable = [256]bool{
+var invalidASCIITable = [256]bool{
 	0x00: true,
 	0x01: true,
 	0x02: true,
@@ -37,6 +37,6 @@ var invalidAsciiTable = [256]bool{
 	0x7F: true,
 }
-func InvalidAscii(b byte) bool {
-	return invalidAsciiTable[b]
+func InvalidASCII(b byte) bool {
+	return invalidASCIITable[b]
 }
+22 -46
@@ -1,20 +1,12 @@
+// Package characters provides functions for working with string encodings.
 package characters
 
 import (
 	"unicode/utf8"
 )
 
-type utf8Err struct {
-	Index int
-	Size  int
-}
-
-func (u utf8Err) Zero() bool {
-	return u.Size == 0
-}
-
-// Verified that a given string is only made of valid UTF-8 characters allowed
-// by the TOML spec:
+// Utf8TomlValidAlreadyEscaped verifies that a given string is only made of
+// valid UTF-8 characters allowed by the TOML spec:
 //
 // Any Unicode character may be used except those that must be escaped:
 // quotation mark, backslash, and the control characters other than tab (U+0000
@@ -23,8 +15,8 @@ func (u utf8Err) Zero() bool {
 // It is a copy of the Go 1.17 utf8.Valid implementation, tweaked to exit early
 // when a character is not allowed.
 //
-// The returned utf8Err is Zero() if the string is valid, or contains the byte
-// index and size of the invalid character.
+// The returned slice is empty if the string is valid, or contains the bytes
+// of the invalid character.
 //
 // quotation mark => already checked
 // backslash => already checked
@@ -32,9 +24,8 @@ func (u utf8Err) Zero() bool {
 // 0x9 => tab, ok
 // 0xA - 0x1F => invalid
 // 0x7F => invalid
-func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) {
+func Utf8TomlValidAlreadyEscaped(p []byte) []byte {
 	// Fast path. Check for and skip 8 bytes of ASCII characters per iteration.
-	offset := 0
 	for len(p) >= 8 {
 		// Combining two 32 bit loads allows the same code to be used
 		// for 32 and 64 bit platforms.
@@ -48,24 +39,19 @@ func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) {
 		}
 		for i, b := range p[:8] {
-			if InvalidAscii(b) {
-				err.Index = offset + i
-				err.Size = 1
-				return
+			if InvalidASCII(b) {
+				return p[i : i+1]
 			}
 		}
 		p = p[8:]
-		offset += 8
 	}
 
 	n := len(p)
 	for i := 0; i < n; {
 		pi := p[i]
 		if pi < utf8.RuneSelf {
-			if InvalidAscii(pi) {
-				err.Index = offset + i
-				err.Size = 1
-				return
+			if InvalidASCII(pi) {
+				return p[i : i+1]
 			}
 			i++
 			continue
@@ -73,44 +59,34 @@ func Utf8TomlValidAlreadyEscaped(p []byte) (err utf8Err) {
 		x := first[pi]
 		if x == xx {
 			// Illegal starter byte.
-			err.Index = offset + i
-			err.Size = 1
-			return
+			return p[i : i+1]
 		}
 		size := int(x & 7)
 		if i+size > n {
 			// Short or invalid.
-			err.Index = offset + i
-			err.Size = n - i
-			return
+			return p[i:n]
 		}
 		accept := acceptRanges[x>>4]
 		if c := p[i+1]; c < accept.lo || accept.hi < c {
-			err.Index = offset + i
-			err.Size = 2
-			return
-		} else if size == 2 {
+			return p[i : i+2]
+		} else if size == 2 { //revive:disable:empty-block
 		} else if c := p[i+2]; c < locb || hicb < c {
-			err.Index = offset + i
-			err.Size = 3
-			return
-		} else if size == 3 {
+			return p[i : i+3]
+		} else if size == 3 { //revive:disable:empty-block
 		} else if c := p[i+3]; c < locb || hicb < c {
-			err.Index = offset + i
-			err.Size = 4
-			return
+			return p[i : i+4]
 		}
 		i += size
 	}
-	return
+	return nil
 }
 
-// Return the size of the next rune if valid, 0 otherwise.
+// Utf8ValidNext returns the size of the next rune if valid, 0 otherwise.
 func Utf8ValidNext(p []byte) int {
 	c := p[0]
 	if c < utf8.RuneSelf {
-		if InvalidAscii(c) {
+		if InvalidASCII(c) {
 			return 0
 		}
 		return 1
@@ -129,10 +105,10 @@ func Utf8ValidNext(p []byte) int {
 	accept := acceptRanges[x>>4]
 	if c := p[1]; c < accept.lo || accept.hi < c {
 		return 0
-	} else if size == 2 {
+	} else if size == 2 { //nolint:revive
 	} else if c := p[2]; c < locb || hicb < c {
 		return 0
-	} else if size == 3 {
+	} else if size == 3 { //nolint:revive
 	} else if c := p[3]; c < locb || hicb < c {
 		return 0
 	}
+9 -10
@@ -1,3 +1,4 @@
+// Package cli provides common functions for command-line programs.
 package cli
 
 import (
@@ -6,7 +7,6 @@ import (
 	"flag"
 	"fmt"
 	"io"
-	"io/ioutil"
 	"os"
 
 	"github.com/pelletier/go-toml/v2"
@@ -23,22 +23,21 @@ type Program struct {
 }
 
 func (p *Program) Execute() {
-	flag.Usage = func() { fmt.Fprintf(os.Stderr, p.Usage) }
+	flag.Usage = func() { fmt.Fprint(os.Stderr, p.Usage) }
 	flag.Parse()
 	os.Exit(p.main(flag.Args(), os.Stdin, os.Stdout, os.Stderr))
 }
 
-func (p *Program) main(files []string, input io.Reader, output, error io.Writer) int {
+func (p *Program) main(files []string, input io.Reader, output, stderr io.Writer) int {
 	err := p.run(files, input, output)
 	if err != nil {
 		var derr *toml.DecodeError
 		if errors.As(err, &derr) {
-			fmt.Fprintln(error, derr.String())
+			_, _ = fmt.Fprintln(stderr, derr.String())
 			row, col := derr.Position()
-			fmt.Fprintln(error, "error occurred at row", row, "column", col)
+			_, _ = fmt.Fprintln(stderr, "error occurred at row", row, "column", col)
 		} else {
-			fmt.Fprintln(error, err.Error())
+			_, _ = fmt.Fprintln(stderr, err.Error())
 		}
 		return -1
@@ -55,7 +54,7 @@ func (p *Program) run(files []string, input io.Reader, output io.Writer) error {
 		if err != nil {
 			return err
 		}
-		defer f.Close()
+		defer func() { _ = f.Close() }()
 		input = f
 	}
 	return p.Fn(input, output)
@@ -72,7 +71,7 @@ func (p *Program) runAllFilesInPlace(files []string) error {
 }
 
 func (p *Program) runFileInPlace(path string) error {
-	in, err := ioutil.ReadFile(path)
+	in, err := os.ReadFile(path) // #nosec G304
 	if err != nil {
 		return err
 	}
@@ -84,5 +83,5 @@ func (p *Program) runFileInPlace(path string) error {
 		return err
 	}
-	return ioutil.WriteFile(path, out.Bytes(), 0600)
+	return os.WriteFile(path, out.Bytes(), 0o600)
 }
+45 -51
@@ -2,17 +2,15 @@ package cli
 
 import (
 	"bytes"
-	"fmt"
+	"errors"
 	"io"
-	"io/ioutil"
 	"os"
 	"path"
 	"strings"
 	"testing"
 
 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 
 func processMain(args []string, input io.Reader, stdout, stderr io.Writer, f ConvertFn) int {
@@ -25,13 +23,13 @@ func TestProcessMainStdin(t *testing.T) {
 	stderr := new(bytes.Buffer)
 	input := strings.NewReader("this is the input")
 
-	exit := processMain([]string{}, input, stdout, stderr, func(r io.Reader, w io.Writer) error {
+	exit := processMain([]string{}, input, stdout, stderr, func(io.Reader, io.Writer) error {
 		return nil
 	})
 
 	assert.Equal(t, 0, exit)
-	assert.Empty(t, stdout.String())
-	assert.Empty(t, stderr.String())
+	assert.Zero(t, stdout.String())
+	assert.Zero(t, stderr.String())
 }
 
@@ -39,13 +37,13 @@ func TestProcessMainStdinErr(t *testing.T) {
 	stderr := new(bytes.Buffer)
 	input := strings.NewReader("this is the input")
 
-	exit := processMain([]string{}, input, stdout, stderr, func(r io.Reader, w io.Writer) error {
-		return fmt.Errorf("something bad")
+	exit := processMain([]string{}, input, stdout, stderr, func(io.Reader, io.Writer) error {
+		return errors.New("something bad")
 	})
 
 	assert.Equal(t, -1, exit)
-	assert.Empty(t, stdout.String())
-	assert.NotEmpty(t, stderr.String())
+	assert.Zero(t, stdout.String())
+	assert.NotZero(t, stderr.String())
 }
 
@@ -53,60 +51,58 @@ func TestProcessMainStdinDecodeErr(t *testing.T) {
 	stderr := new(bytes.Buffer)
 	input := strings.NewReader("this is the input")
 
-	exit := processMain([]string{}, input, stdout, stderr, func(r io.Reader, w io.Writer) error {
+	exit := processMain([]string{}, input, stdout, stderr, func(io.Reader, io.Writer) error {
 		var v interface{}
 		return toml.Unmarshal([]byte(`qwe = 001`), &v)
 	})
 
 	assert.Equal(t, -1, exit)
-	assert.Empty(t, stdout.String())
-	assert.Contains(t, stderr.String(), "error occurred at")
+	assert.Zero(t, stdout.String())
+	assert.True(t, strings.Contains(stderr.String(), "error occurred at"))
 }
 
 func TestProcessMainFileExists(t *testing.T) {
-	tmpfile, err := ioutil.TempFile("", "example")
-	require.NoError(t, err)
-	defer os.Remove(tmpfile.Name())
-	_, err = tmpfile.Write([]byte(`some data`))
-	require.NoError(t, err)
+	tmpfile, err := os.CreateTemp(t.TempDir(), "example")
+	assert.NoError(t, err)
+	_, err = tmpfile.WriteString(`some data`)
+	assert.NoError(t, err)
+	assert.NoError(t, tmpfile.Close())
 
 	stdout := new(bytes.Buffer)
 	stderr := new(bytes.Buffer)
 
-	exit := processMain([]string{tmpfile.Name()}, nil, stdout, stderr, func(r io.Reader, w io.Writer) error {
+	exit := processMain([]string{tmpfile.Name()}, nil, stdout, stderr, func(io.Reader, io.Writer) error {
		return nil
 	})
 
 	assert.Equal(t, 0, exit)
-	assert.Empty(t, stdout.String())
-	assert.Empty(t, stderr.String())
+	assert.Zero(t, stdout.String())
+	assert.Zero(t, stderr.String())
 }
 
 func TestProcessMainFileDoesNotExist(t *testing.T) {
 	stdout := new(bytes.Buffer)
 	stderr := new(bytes.Buffer)
-	exit := processMain([]string{"/lets/hope/this/does/not/exist"}, nil, stdout, stderr, func(r io.Reader, w io.Writer) error {
+	exit := processMain([]string{"/lets/hope/this/does/not/exist"}, nil, stdout, stderr, func(io.Reader, io.Writer) error {
 		return nil
 	})
 
 	assert.Equal(t, -1, exit)
-	assert.Empty(t, stdout.String())
-	assert.NotEmpty(t, stderr.String())
+	assert.Zero(t, stdout.String())
+	assert.NotZero(t, stderr.String())
 }
 
 func TestProcessMainFilesInPlace(t *testing.T) {
-	dir, err := ioutil.TempDir("", "")
-	require.NoError(t, err)
-	defer os.RemoveAll(dir)
+	dir := t.TempDir()
 	path1 := path.Join(dir, "file1")
 	path2 := path.Join(dir, "file2")
-	err = ioutil.WriteFile(path1, []byte("content 1"), 0600)
-	require.NoError(t, err)
-	err = ioutil.WriteFile(path2, []byte("content 2"), 0600)
-	require.NoError(t, err)
+	err := os.WriteFile(path1, []byte("content 1"), 0o600)
+	assert.NoError(t, err)
+	err = os.WriteFile(path2, []byte("content 2"), 0o600)
+	assert.NoError(t, err)
 
 	p := Program{
 		Fn: dummyFileFn,
@@ -115,15 +111,15 @@ func TestProcessMainFilesInPlace(t *testing.T) {
 	exit := p.main([]string{path1, path2}, os.Stdin, os.Stdout, os.Stderr)
-	require.Equal(t, 0, exit)
+	assert.Equal(t, 0, exit)
 
-	v1, err := ioutil.ReadFile(path1)
-	require.NoError(t, err)
-	require.Equal(t, "1", string(v1))
+	v1, err := os.ReadFile(path1)
+	assert.NoError(t, err)
+	assert.Equal(t, "1", string(v1))
 
-	v2, err := ioutil.ReadFile(path2)
-	require.NoError(t, err)
-	require.Equal(t, "2", string(v2))
+	v2, err := os.ReadFile(path2)
+	assert.NoError(t, err)
+	assert.Equal(t, "2", string(v2))
 }
 
 func TestProcessMainFilesInPlaceErrRead(t *testing.T) {
@@ -134,35 +130,33 @@ func TestProcessMainFilesInPlaceErrRead(t *testing.T) {
 	exit := p.main([]string{"/this/path/is/invalid"}, os.Stdin, os.Stdout, os.Stderr)
-	require.Equal(t, -1, exit)
+	assert.Equal(t, -1, exit)
 }
 
 func TestProcessMainFilesInPlaceFailFn(t *testing.T) {
-	dir, err := ioutil.TempDir("", "")
-	require.NoError(t, err)
-	defer os.RemoveAll(dir)
+	dir := t.TempDir()
 	path1 := path.Join(dir, "file1")
-	err = ioutil.WriteFile(path1, []byte("content 1"), 0600)
-	require.NoError(t, err)
+	err := os.WriteFile(path1, []byte("content 1"), 0o600)
+	assert.NoError(t, err)
 	p := Program{
-		Fn:      func(io.Reader, io.Writer) error { return fmt.Errorf("oh no") },
+		Fn:      func(io.Reader, io.Writer) error { return errors.New("oh no") },
 		Inplace: true,
 	}
 	exit := p.main([]string{path1}, os.Stdin, os.Stdout, os.Stderr)
-	require.Equal(t, -1, exit)
+	assert.Equal(t, -1, exit)
 
-	v1, err := ioutil.ReadFile(path1)
-	require.NoError(t, err)
-	require.Equal(t, "content 1", string(v1))
+	v1, err := os.ReadFile(path1)
+	assert.NoError(t, err)
+	assert.Equal(t, "content 1", string(v1))
 }
 
 func dummyFileFn(r io.Reader, w io.Writer) error {
-	b, err := ioutil.ReadAll(r)
+	b, err := io.ReadAll(r)
 	if err != nil {
 		return err
 	}
-65
@@ -1,65 +0,0 @@
package danger

import (
	"fmt"
	"reflect"
	"unsafe"
)

const maxInt = uintptr(int(^uint(0) >> 1))

func SubsliceOffset(data []byte, subslice []byte) int {
	datap := (*reflect.SliceHeader)(unsafe.Pointer(&data))
	hlp := (*reflect.SliceHeader)(unsafe.Pointer(&subslice))

	if hlp.Data < datap.Data {
		panic(fmt.Errorf("subslice address (%d) is before data address (%d)", hlp.Data, datap.Data))
	}
	offset := hlp.Data - datap.Data

	if offset > maxInt {
		panic(fmt.Errorf("slice offset larger than int (%d)", offset))
	}

	intoffset := int(offset)

	if intoffset > datap.Len {
		panic(fmt.Errorf("slice offset (%d) is farther than data length (%d)", intoffset, datap.Len))
	}

	if intoffset+hlp.Len > datap.Len {
		panic(fmt.Errorf("slice ends (%d+%d) is farther than data length (%d)", intoffset, hlp.Len, datap.Len))
	}

	return intoffset
}

func BytesRange(start []byte, end []byte) []byte {
	if start == nil || end == nil {
		panic("cannot call BytesRange with nil")
	}
	startp := (*reflect.SliceHeader)(unsafe.Pointer(&start))
	endp := (*reflect.SliceHeader)(unsafe.Pointer(&end))

	if startp.Data > endp.Data {
		panic(fmt.Errorf("start pointer address (%d) is after end pointer address (%d)", startp.Data, endp.Data))
	}

	l := startp.Len
	endLen := int(endp.Data-startp.Data) + endp.Len
	if endLen > l {
		l = endLen
	}

	if l > startp.Cap {
		panic(fmt.Errorf("range length is larger than capacity"))
	}

	return start[:l]
}

func Stride(ptr unsafe.Pointer, size uintptr, offset int) unsafe.Pointer {
	// TODO: replace with unsafe.Add when Go 1.17 is released
	// https://github.com/golang/go/issues/40481
	return unsafe.Pointer(uintptr(ptr) + uintptr(int(size)*offset))
}
-178
@@ -1,178 +0,0 @@
package danger_test

import (
	"testing"
	"unsafe"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/pelletier/go-toml/v2/internal/danger"
)

func TestSubsliceOffsetValid(t *testing.T) {
	examples := []struct {
		desc   string
		test   func() ([]byte, []byte)
		offset int
	}{
		{
			desc: "simple",
			test: func() ([]byte, []byte) {
				data := []byte("hello")
				return data, data[1:]
			},
			offset: 1,
		},
	}

	for _, e := range examples {
		t.Run(e.desc, func(t *testing.T) {
			d, s := e.test()
			offset := danger.SubsliceOffset(d, s)
			assert.Equal(t, e.offset, offset)
		})
	}
}

func TestSubsliceOffsetInvalid(t *testing.T) {
	examples := []struct {
		desc string
		test func() ([]byte, []byte)
	}{
		{
			desc: "unrelated arrays",
			test: func() ([]byte, []byte) {
				return []byte("one"), []byte("two")
			},
		},
		{
			desc: "slice starts before data",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[5:], full[1:]
			},
		},
		{
			desc: "slice starts after data",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[:3], full[5:]
			},
		},
		{
			desc: "slice ends after data",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[:5], full[3:8]
			},
		},
	}

	for _, e := range examples {
		t.Run(e.desc, func(t *testing.T) {
			d, s := e.test()
			require.Panics(t, func() {
				danger.SubsliceOffset(d, s)
			})
		})
	}
}

func TestStride(t *testing.T) {
	a := []byte{1, 2, 3, 4}
	x := &a[1]
	n := (*byte)(danger.Stride(unsafe.Pointer(x), unsafe.Sizeof(byte(0)), 1))
	require.Equal(t, &a[2], n)
	n = (*byte)(danger.Stride(unsafe.Pointer(x), unsafe.Sizeof(byte(0)), -1))
	require.Equal(t, &a[0], n)
}

func TestBytesRange(t *testing.T) {
	type fn = func() ([]byte, []byte)
	examples := []struct {
		desc     string
		test     fn
		expected []byte
	}{
		{
			desc: "simple",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[1:3], full[6:8]
			},
			expected: []byte("ello wo"),
		},
		{
			desc: "full",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[0:1], full[len(full)-1:]
			},
			expected: []byte("hello world"),
		},
		{
			desc: "end before start",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[len(full)-1:], full[0:1]
			},
		},
		{
			desc: "nils",
			test: func() ([]byte, []byte) {
				return nil, nil
			},
		},
		{
			desc: "nils start",
			test: func() ([]byte, []byte) {
				return nil, []byte("foo")
			},
		},
		{
			desc: "nils end",
			test: func() ([]byte, []byte) {
				return []byte("foo"), nil
			},
		},
		{
			desc: "start is end",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[1:3], full[1:3]
			},
			expected: []byte("el"),
		},
		{
			desc: "end contained in start",
			test: func() ([]byte, []byte) {
				full := []byte("hello world")
				return full[1:7], full[2:4]
			},
			expected: []byte("ello w"),
		},
		{
			desc: "different backing arrays",
			test: func() ([]byte, []byte) {
				one := []byte("hello world")
				two := []byte("hello world")
				return one, two
			},
		},
	}

	for _, e := range examples {
		t.Run(e.desc, func(t *testing.T) {
			start, end := e.test()
			if e.expected == nil {
				require.Panics(t, func() {
					danger.BytesRange(start, end)
				})
			} else {
				res := danger.BytesRange(start, end)
				require.Equal(t, e.expected, res)
			}
		})
	}
}
-23
@@ -1,23 +0,0 @@
package danger

import (
	"reflect"
	"unsafe"
)

// TypeID is used as key in encoder and decoder caches to enable using
// the optimized runtime.mapaccess2_fast64 function instead of the more
// expensive lookup that would be required if we used reflect.Type as map key.
//
// TypeID holds the pointer to the reflect.Type value, which is unique
// in the program.
//
// https://github.com/segmentio/encoding/blob/master/json/codec.go#L59-L61
type TypeID unsafe.Pointer

func MakeTypeID(t reflect.Type) TypeID {
	// reflect.Type has the fields:
	//   typ unsafe.Pointer
	//   ptr unsafe.Pointer
	return TypeID((*[2]unsafe.Pointer)(unsafe.Pointer(&t))[1])
}
@@ -1,4 +1,4 @@
-package imported_tests
+package imported_tests //revive:disable:var-naming
 
 // Those tests have been imported from v1, but adjust to match the new
 // defaults of v2.
@@ -9,7 +9,7 @@ import (
 	"time"
 
 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 
 func TestDocMarshal(t *testing.T) {
@@ -21,12 +21,12 @@
 		Subdocs     testDocSubs   `toml:"subdoc"`
 		Basics      testDocBasics `toml:"basic"`
 		SubDocList  []testSubDoc  `toml:"subdoclist"`
-		err         int           `toml:"shouldntBeHere"`
+		err         int           `toml:"shouldntBeHere"` //nolint:unused
 		unexported  int           `toml:"shouldntBeHere"`
 		Unexported2 int           `toml:"-"`
 	}
 
-	var docData = testDoc{
+	docData := testDoc{
 		Title:       "TOML Marshal Testing",
 		unexported:  0,
 		Unexported2: 0,
@@ -107,13 +107,13 @@ name = 'List.Second'
 `
 
 	result, err := toml.Marshal(docData)
-	require.NoError(t, err)
-	require.Equal(t, marshalTestToml, string(result))
+	assert.NoError(t, err)
+	assert.Equal(t, marshalTestToml, string(result))
 }
 
 func TestBasicMarshalQuotedKey(t *testing.T) {
 	result, err := toml.Marshal(quotedKeyMarshalTestData)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	expected := `'Z.string-àéù' = 'Hello'
 'Yfloat-𝟘' = 3.5
@@ -128,8 +128,7 @@ String2 = 'Two'
 String2 = 'Three'
 `
 
-	require.Equal(t, string(expected), string(result))
+	assert.Equal(t, expected, string(result))
 }
 
 func TestEmptyMarshal(t *testing.T) {
@@ -153,7 +152,7 @@ func TestEmptyMarshal(t *testing.T) {
 		Map: map[string]string{},
 	}
 
 	result, err := toml.Marshal(doc)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	expected := `title = 'Placeholder'
 bool = false
@@ -164,7 +163,7 @@ stringlist = []
 [map]
 `
 
-	require.Equal(t, string(expected), string(result))
+	assert.Equal(t, expected, string(result))
 }
 
 type textMarshaler struct {
@@ -187,13 +186,13 @@ func TestTextMarshaler(t *testing.T) {
 	t.Run("at root", func(t *testing.T) {
 		_, err := toml.Marshal(m)
 		// in v2 we do not allow TextMarshaler at root
-		require.Error(t, err)
+		assert.Error(t, err)
 	})
 
 	t.Run("leaf", func(t *testing.T) {
 		res, err := toml.Marshal(wrap{m})
-		require.NoError(t, err)
-		require.Equal(t, "TM = 'Sally Fields'\n", string(res))
+		assert.NoError(t, err)
+		assert.Equal(t, "TM = 'Sally Fields'\n", string(res))
 	})
 }
@@ -1,4 +1,4 @@
-package imported_tests
+package imported_tests //revive:disable:var-naming
 
 // Those tests were imported directly from go-toml v1
 // https://raw.githubusercontent.com/pelletier/go-toml/a2e52561804c6cd9392ebf0048ca64fe4af67a43/marshal_test.go
@@ -16,8 +16,7 @@ import (
 	"time"
 
 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 
 type basicMarshalTestStruct struct {
@@ -123,7 +122,7 @@ func TestInterface(t *testing.T) {
 	var config Conf
 	config.Inter = &NestedStruct{}
 	err := toml.Unmarshal(doc, &config)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	expected := Conf{
 		Name: "rui",
 		Age:  18,
@@ -139,8 +138,8 @@ func TestInterface(t *testing.T) {
 func TestBasicUnmarshal(t *testing.T) {
 	result := basicMarshalTestStruct{}
 	err := toml.Unmarshal(basicTestToml, &result)
-	require.NoError(t, err)
-	require.Equal(t, basicTestData, result)
+	assert.NoError(t, err)
+	assert.Equal(t, basicTestData, result)
 }
 
 type quotedKeyMarshalTestStruct struct {
@@ -150,9 +149,6 @@ type quotedKeyMarshalTestStruct struct {
 	SubList []basicMarshalTestSubStruct `toml:"W.sublist-𝟘"`
 }
 
-// TODO: Remove nolint once var is used by a test
-//
-//nolint:deadcode,unused,varcheck
 var quotedKeyMarshalTestData = quotedKeyMarshalTestStruct{
 	String: "Hello",
 	Float:  3.5,
@@ -162,7 +158,7 @@ var quotedKeyMarshalTestData = quotedKeyMarshalTestStruct{
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var quotedKeyMarshalTestToml = []byte(`"Yfloat-𝟘" = 3.5
 "Z.string-àéù" = "Hello"
@@ -184,11 +180,12 @@ type testDoc struct {
 	Subdocs     testDocSubs   `toml:"subdoc"`
 	Basics      testDocBasics `toml:"basic"`
 	SubDocList  []testSubDoc  `toml:"subdoclist"`
-	err         int           `toml:"shouldntBeHere"` // nolint:structcheck,unused
+	err         int           `toml:"shouldntBeHere"` //nolint:unused
 	unexported  int           `toml:"shouldntBeHere"`
 	Unexported2 int           `toml:"-"`
 }
 
+//nolint:unused
 type testMapDoc struct {
 	Title    string            `toml:"title"`
 	BasicMap map[string]string `toml:"basic_map"`
@@ -275,7 +272,7 @@ var docData = testDoc{
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var mapTestDoc = testMapDoc{
 	Title: "TOML Marshal Testing",
 	BasicMap: map[string]string{
@@ -300,7 +297,7 @@ func TestDocUnmarshal(t *testing.T) {
 	result := testDoc{}
 	err := toml.Unmarshal(marshalTestToml, &result)
 	expected := docData
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, expected, result)
 }
@@ -340,7 +337,7 @@ shouldntBeHere = 2
 func TestUnexportedUnmarshal(t *testing.T) {
 	result := unexportedMarshalTestStruct{}
 	err := toml.Unmarshal(unexportedTestToml, &result)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, unexportedTestData, result)
 }
@@ -456,7 +453,7 @@ func TestEmptytomlUnmarshal(t *testing.T) {
 	result := emptyMarshalTestStruct{}
 	err := toml.Unmarshal(emptyTestToml, &result)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, emptyTestData, result)
 }
@@ -504,7 +501,7 @@ Str = "Hello"
 func TestPointerUnmarshal(t *testing.T) {
 	result := pointerMarshalTestStruct{}
 	err := toml.Unmarshal(pointerTestToml, &result)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, pointerTestData, result)
 }
@@ -540,35 +537,39 @@ StringPtr = [["Three", "Four"]]
 func TestNestedUnmarshal(t *testing.T) {
 	result := nestedMarshalTestStruct{}
 	err := toml.Unmarshal(nestedTestToml, &result)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, nestedTestData, result)
 }
 
+//nolint:unused
 type customMarshalerParent struct {
 	Self    customMarshaler   `toml:"me"`
 	Friends []customMarshaler `toml:"friends"`
 }
 
+//nolint:unused
 type customMarshaler struct {
 	FirstName string
 	LastName  string
 }
 
+//nolint:unused
 func (c customMarshaler) MarshalTOML() ([]byte, error) {
 	fullName := fmt.Sprintf("%s %s", c.FirstName, c.LastName)
 	return []byte(fullName), nil
 }
 
+//nolint:unused
 var customMarshalerData = customMarshaler{FirstName: "Sally", LastName: "Fields"}
 
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var customMarshalerToml = []byte(`Sally Fields`)
 
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var nestedCustomMarshalerData = customMarshalerParent{
 	Self:    customMarshaler{FirstName: "Maiku", LastName: "Suteda"},
 	Friends: []customMarshaler{customMarshalerData},
@@ -576,7 +577,7 @@ var nestedCustomMarshalerData = customMarshalerParent{
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var nestedCustomMarshalerToml = []byte(`friends = ["Sally Fields"]
 me = "Maiku Suteda"
 `)
@@ -591,7 +592,7 @@ func (x *IntOrString) MarshalTOML() ([]byte, error) {
 	s := *(*string)(x)
 	_, err := strconv.Atoi(s)
 	if err != nil {
-		return []byte(fmt.Sprintf(`"%s"`, s)), nil
+		return []byte(fmt.Sprintf(`"%s"`, s)), nil //nolint:nilerr
 	}
 	return []byte(s), nil
 }
@@ -663,7 +664,7 @@ func (m *textPointerMarshaler) MarshalText() ([]byte, error) {
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var commentTestToml = []byte(`
 # it's a comment on type
 [postgres]
@@ -688,6 +689,7 @@ var commentTestToml = []byte(`
 My = "Baar"
 `)
 
+//nolint:unused
 type mapsTestStruct struct {
 	Simple map[string]string
 	Paths  map[string]string
@@ -701,7 +703,7 @@ type mapsTestStruct struct {
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var mapsTestData = mapsTestStruct{
 	Simple: map[string]string{
 		"one plus one": "two",
@@ -725,7 +727,7 @@ var mapsTestData = mapsTestStruct{
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var mapsTestToml = []byte(`
 [Other]
 "testing" = 3.9999
@@ -748,7 +750,7 @@ var mapsTestToml = []byte(`
 // TODO: Remove nolint once type is used by a test
 //
-//nolint:deadcode,unused
+//nolint:unused
 type structArrayNoTag struct {
 	A struct {
 		B []int64
@@ -758,7 +760,7 @@ type structArrayNoTag struct {
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var customTagTestToml = []byte(`
 [postgres]
 password = "bvalue"
@@ -773,7 +775,7 @@ var customTagTestToml = []byte(`
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var customCommentTagTestToml = []byte(`
 # db connection
 [postgres]
@@ -787,7 +789,7 @@ var customCommentTagTestToml = []byte(`
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var customCommentedTagTestToml = []byte(`
 [postgres]
 # password = "bvalue"
@@ -834,7 +836,7 @@ func TestUnmarshalTabInStringAndQuotedKey(t *testing.T) {
 		t.Run(test.desc, func(t *testing.T) {
 			result := Test{}
 			err := toml.Unmarshal(test.input, &result)
-			require.NoError(t, err)
+			assert.NoError(t, err)
 			assert.Equal(t, test.expected, result)
 		})
 	}
@@ -842,7 +844,7 @@ func TestUnmarshalTabInStringAndQuotedKey(t *testing.T) {
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var customMultilineTagTestToml = []byte(`int_slice = [
 	1,
 	2,
@@ -852,7 +854,7 @@ var customMultilineTagTestToml = []byte(`int_slice = [
 // TODO: Remove nolint once var is used by a test
 //
-//nolint:deadcode,unused,varcheck
+//nolint:unused
 var testDocBasicToml = []byte(`
 [document]
 bool_val = true
@@ -863,16 +865,12 @@ var testDocBasicToml = []byte(`
uint_val = 5001 uint_val = 5001
`) `)
// TODO: Remove nolint once type is used by a test //nolint:unused
//
//nolint:deadcode
type testDocCustomTag struct { type testDocCustomTag struct {
Doc testDocBasicsCustomTag `file:"document"` Doc testDocBasicsCustomTag `file:"document"`
} }
// TODO: Remove nolint once type is used by a test //nolint:unused
//
//nolint:deadcode
type testDocBasicsCustomTag struct { type testDocBasicsCustomTag struct {
Bool bool `file:"bool_val"` Bool bool `file:"bool_val"`
Date time.Time `file:"date_val"` Date time.Time `file:"date_val"`
@@ -883,9 +881,7 @@ type testDocBasicsCustomTag struct {
unexported int `file:"shouldntBeHere"` unexported int `file:"shouldntBeHere"`
} }
// TODO: Remove nolint once var is used by a test //nolint:unused
//
//nolint:deadcode,varcheck
var testDocCustomTagData = testDocCustomTag{ var testDocCustomTagData = testDocCustomTag{
Doc: testDocBasicsCustomTag{ Doc: testDocBasicsCustomTag{
Bool: true, Bool: true,
@@ -963,7 +959,7 @@ func TestUnmarshalTypeTableHeader(t *testing.T) {
} }
expected := map[header]map[string]int{ expected := map[header]map[string]int{
"test": map[string]int{"a": 1}, "test": {"a": 1},
} }
if !reflect.DeepEqual(result, expected) { if !reflect.DeepEqual(result, expected) {
@@ -988,13 +984,13 @@ func TestUnmarshalInvalidPointerKind(t *testing.T) {
// TODO: Remove nolint once var is used by a test // TODO: Remove nolint once var is used by a test
// //
//nolint:deadcode,unused //nolint:unused
type testDuration struct { type testDuration struct {
Nanosec time.Duration `toml:"nanosec"` Nanosec time.Duration `toml:"nanosec"`
Microsec1 time.Duration `toml:"microsec1"` Microsec1 time.Duration `toml:"microsec1"`
Microsec2 *time.Duration `toml:"microsec2"` Microsec2 *time.Duration `toml:"microsec2"`
Millisec time.Duration `toml:"millisec"` Millisec time.Duration `toml:"millisec"`
Sec time.Duration `toml:"sec"` Sec time.Duration `toml:"sec"` //nolint:staticcheck
Min time.Duration `toml:"min"` Min time.Duration `toml:"min"`
Hour time.Duration `toml:"hour"` Hour time.Duration `toml:"hour"`
Mixed time.Duration `toml:"mixed"` Mixed time.Duration `toml:"mixed"`
@@ -1003,7 +999,7 @@ type testDuration struct {
// TODO: Remove nolint once var is used by a test // TODO: Remove nolint once var is used by a test
// //
//nolint:deadcode,unused,varcheck //nolint:unused
var testDurationToml = []byte(` var testDurationToml = []byte(`
nanosec = "1ns" nanosec = "1ns"
microsec1 = "1us" microsec1 = "1us"
@@ -1018,7 +1014,7 @@ a_string = "15s"
// TODO: Remove nolint once var is used by a test // TODO: Remove nolint once var is used by a test
// //
//nolint:deadcode,unused,varcheck //nolint:unused
var testDurationToml2 = []byte(`a_string = "15s" var testDurationToml2 = []byte(`a_string = "15s"
hour = "1h0m0s" hour = "1h0m0s"
microsec1 = "1µs" microsec1 = "1µs"
@@ -1032,15 +1028,14 @@ sec = "1s"
// TODO: Remove nolint once type is used by a test // TODO: Remove nolint once type is used by a test
// //
//nolint:deadcode,unused //nolint:unused
type testBadDuration struct { type testBadDuration struct {
Val time.Duration `toml:"val"` Val time.Duration `toml:"val"`
} }
// TODO: add back camelCase test // TODO: add back camelCase test
var testCamelCaseKeyToml = []byte(`fooBar = 10`) //nolint:unused var testCamelCaseKeyToml = []byte(`fooBar = 10`)
//nolint:unused
func TestUnmarshalCamelCaseKey(t *testing.T) { func TestUnmarshalCamelCaseKey(t *testing.T) {
t.Skipf("don't know if it is a good idea to automatically convert like that yet") t.Skipf("don't know if it is a good idea to automatically convert like that yet")
var x struct { var x struct {
@@ -1059,7 +1054,7 @@ func TestUnmarshalCamelCaseKey(t *testing.T) {
func TestUnmarshalNegativeUint(t *testing.T) { func TestUnmarshalNegativeUint(t *testing.T) {
t.Skipf("not sure if we this should always error") t.Skipf("not sure if we this should always error")
type check struct{ U uint } // nolint:unused type check struct{ U uint }
err := toml.Unmarshal([]byte("U = -1"), &check{}) err := toml.Unmarshal([]byte("U = -1"), &check{})
assert.Error(t, err) assert.Error(t, err)
} }
@@ -1090,7 +1085,7 @@ func TestUnmarshalCheckConversionFloatInt(t *testing.T) {
for _, test := range testCases { for _, test := range testCases {
t.Run(test.desc, func(t *testing.T) { t.Run(test.desc, func(t *testing.T) {
err := toml.Unmarshal([]byte(test.input), &conversionCheck{}) err := toml.Unmarshal([]byte(test.input), &conversionCheck{})
require.Error(t, err) assert.Error(t, err)
}) })
} }
} }
@@ -1125,7 +1120,7 @@ func TestUnmarshalOverflow(t *testing.T) {
for _, test := range testCases { for _, test := range testCases {
t.Run(test.desc, func(t *testing.T) { t.Run(test.desc, func(t *testing.T) {
err := toml.Unmarshal([]byte(test.input), &overflow{}) err := toml.Unmarshal([]byte(test.input), &overflow{})
require.Error(t, err) assert.Error(t, err)
}) })
} }
} }
@@ -1536,7 +1531,7 @@ func TestUnmarshalLocalDateTime(t *testing.T) {
} }
for i, example := range examples { for i, example := range examples {
doc := fmt.Sprintf(`date = %s`, example.in) doc := "date = " + example.in
t.Run(fmt.Sprintf("ToLocalDateTime_%d_%s", i, example.name), func(t *testing.T) { t.Run(fmt.Sprintf("ToLocalDateTime_%d_%s", i, example.name), func(t *testing.T) {
type dateStruct struct { type dateStruct struct {
@@ -1622,7 +1617,7 @@ func TestUnmarshalLocalTime(t *testing.T) {
} }
for i, example := range examples { for i, example := range examples {
doc := fmt.Sprintf(`Time = %s`, example.in) doc := "Time = " + example.in
t.Run(fmt.Sprintf("ToLocalTime_%d_%s", i, example.name), func(t *testing.T) { t.Run(fmt.Sprintf("ToLocalTime_%d_%s", i, example.name), func(t *testing.T) {
type dateStruct struct { type dateStruct struct {
@@ -1745,7 +1740,7 @@ Age = 23
} }
actual := OuterStruct{} actual := OuterStruct{}
err := toml.Unmarshal(doc, &actual) err := toml.Unmarshal(doc, &actual)
require.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, expected, actual) assert.Equal(t, expected, actual)
} }
@@ -1830,7 +1825,7 @@ InnerField = "After4"
} }
err := toml.Unmarshal(doc, &actual) err := toml.Unmarshal(doc, &actual)
require.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, expected, actual) assert.Equal(t, expected, actual)
} }
@@ -1879,7 +1874,7 @@ type arrayTooSmallStruct struct {
func TestUnmarshalSlice(t *testing.T) { func TestUnmarshalSlice(t *testing.T) {
var actual sliceStruct var actual sliceStruct
err := toml.Unmarshal(sliceTomlDemo, &actual) err := toml.Unmarshal(sliceTomlDemo, &actual)
require.NoError(t, err) assert.NoError(t, err)
expected := sliceStruct{ expected := sliceStruct{
Slice: []string{"Howdy", "Hey There"}, Slice: []string{"Howdy", "Hey There"},
SlicePtr: &[]string{"Howdy", "Hey There"}, SlicePtr: &[]string{"Howdy", "Hey There"},
@@ -1907,19 +1902,12 @@ func TestUnmarshalMixedTypeSlice(t *testing.T) {
ArrayField []interface{} ArrayField []interface{}
} }
//doc := []byte(`ArrayField = [3.14,100,true,"hello world",{Field = "inner1"},[{Field = "inner2"},{Field = "inner3"}]]
//`)
doc := []byte(`ArrayField = [{Field = "inner1"},[{Field = "inner2"},{Field = "inner3"}]] doc := []byte(`ArrayField = [{Field = "inner1"},[{Field = "inner2"},{Field = "inner3"}]]
`) `)
actual := TestStruct{} actual := TestStruct{}
expected := TestStruct{ expected := TestStruct{
ArrayField: []interface{}{ ArrayField: []interface{}{
//3.14,
//int64(100),
//true,
//"hello world",
map[string]interface{}{ map[string]interface{}{
"Field": "inner1", "Field": "inner1",
}, },
@@ -1930,7 +1918,7 @@ func TestUnmarshalMixedTypeSlice(t *testing.T) {
}, },
} }
err := toml.Unmarshal(doc, &actual) err := toml.Unmarshal(doc, &actual)
require.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, expected, actual) assert.Equal(t, expected, actual)
} }
@@ -1939,7 +1927,7 @@ func TestUnmarshalArray(t *testing.T) {
var actual arrayStruct var actual arrayStruct
err = toml.Unmarshal(sliceTomlDemo, &actual) err = toml.Unmarshal(sliceTomlDemo, &actual)
require.NoError(t, err) assert.NoError(t, err)
expected := arrayStruct{ expected := arrayStruct{
Slice: [4]string{"Howdy", "Hey There"}, Slice: [4]string{"Howdy", "Hey There"},
@@ -1998,11 +1986,17 @@ func TestDecoderStrict(t *testing.T) {
} }
err := strictDecoder(input).Decode(&doc) err := strictDecoder(input).Decode(&doc)
require.Error(t, err) assert.Error(t, err)
require.IsType(t, &toml.StrictMissingError{}, err)
se := err.(*toml.StrictMissingError)
keys := []toml.Key{} assert.Equal(t,
reflect.TypeOf(err), reflect.TypeOf(&toml.StrictMissingError{}),
"Expected a *toml.StrictMissingError, got: %v", reflect.TypeOf(err),
)
var se *toml.StrictMissingError
assert.True(t, errors.As(err, &se))
keys := make([]toml.Key, 0, len(se.Errors))
for _, e := range se.Errors { for _, e := range se.Errors {
keys = append(keys, e.Key()) keys = append(keys, e.Key())
@@ -2015,13 +2009,14 @@ func TestDecoderStrict(t *testing.T) {
{"undecoded", "array"}, {"undecoded", "array"},
} }
require.Equal(t, expectedKeys, keys) assert.Equal(t, expectedKeys, keys)
err = decoder(input).Decode(&doc) err = decoder(input).Decode(&doc)
require.NoError(t, err) assert.NoError(t, err)
var m map[string]interface{} var m map[string]interface{}
err = decoder(input).Decode(&m) err = decoder(input).Decode(&m)
assert.NoError(t, err)
} }
func TestDecoderStrictValid(t *testing.T) { func TestDecoderStrictValid(t *testing.T) {
@@ -2036,7 +2031,7 @@ func TestDecoderStrictValid(t *testing.T) {
} }
err := strictDecoder(input).Decode(&doc) err := strictDecoder(input).Decode(&doc)
require.NoError(t, err) assert.NoError(t, err)
} }
type docUnmarshalTOML struct { type docUnmarshalTOML struct {
@@ -2058,19 +2053,6 @@ func (d *docUnmarshalTOML) UnmarshalTOML(i interface{}) error {
return nil return nil
} }
func TestDecoderStrictCustomUnmarshal(t *testing.T) {
t.Skip()
//input := `key = "ok"`
//var doc docUnmarshalTOML
//err := NewDecoder(bytes.NewReader([]byte(input))).Strict(true).Decode(&doc)
//if err != nil {
// t.Fatal("unexpected error:", err)
//}
//if doc.Decoded.Key != "ok" {
// t.Errorf("Bad unmarshal: expected ok, got %v", doc.Decoded.Key)
//}
}
type parent struct { type parent struct {
Doc docUnmarshalTOML Doc docUnmarshalTOML
DocPointer *docUnmarshalTOML DocPointer *docUnmarshalTOML
@@ -2087,7 +2069,7 @@ func TestCustomUnmarshal(t *testing.T) {
var d parent var d parent
err := toml.Unmarshal([]byte(input), &d) err := toml.Unmarshal([]byte(input), &d)
require.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "ok1", d.Doc.Decoded.Key) assert.Equal(t, "ok1", d.Doc.Decoded.Key)
assert.Equal(t, "ok2", d.DocPointer.Decoded.Key) assert.Equal(t, "ok2", d.DocPointer.Decoded.Key)
} }
@@ -2153,7 +2135,7 @@ Int = 21
Float = 2.0 Float = 2.0
` `
err := toml.Unmarshal([]byte(input), &doc) err := toml.Unmarshal([]byte(input), &doc)
require.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, 12, doc.UnixTime.Value) assert.Equal(t, 12, doc.UnixTime.Value)
assert.Equal(t, 42, doc.Version.Value) assert.Equal(t, 42, doc.Version.Value)
assert.Equal(t, 1, doc.Bool.Value) assert.Equal(t, 1, doc.Bool.Value)
@@ -2223,7 +2205,10 @@ func TestUnmarshalEmptyInterface(t *testing.T) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
require.IsType(t, map[string]interface{}{}, v) assert.Equal(t,
reflect.TypeOf(map[string]interface{}{}), reflect.TypeOf(v),
"Expected map[string]interface{}{} type, got: %v", reflect.TypeOf(v),
)
x := v.(map[string]interface{}) x := v.(map[string]interface{})
assert.Equal(t, "pelletier", x["User"]) assert.Equal(t, "pelletier", x["User"])
@@ -2271,12 +2256,12 @@ type Custom struct {
v string v string
} }
func (c *Custom) UnmarshalTOML(v interface{}) error { func (c *Custom) UnmarshalTOML(interface{}) error {
c.v = "called" c.v = "called"
return nil return nil
} }
func TestGithubIssue431(t *testing.T) { func TestGitHubIssue431(t *testing.T) {
doc := `key = "value"` doc := `key = "value"`
var c Config var c Config
if err := toml.Unmarshal([]byte(doc), &c); err != nil { if err := toml.Unmarshal([]byte(doc), &c); err != nil {
@@ -2296,14 +2281,14 @@ type durationString struct {
time.Duration time.Duration
} }
func (d *durationString) UnmarshalTOML(v interface{}) error { func (d *durationString) UnmarshalTOML(interface{}) error {
d.Duration = 10 * time.Second d.Duration = 10 * time.Second
return nil return nil
} }
type config437Error struct{} type config437Error struct{}
func (e *config437Error) UnmarshalTOML(v interface{}) error { func (e *config437Error) UnmarshalTOML(interface{}) error {
return errors.New("expected") return errors.New("expected")
} }
@@ -2314,7 +2299,7 @@ type config437 struct {
} `toml:"HTTP"` } `toml:"HTTP"`
} }
func TestGithubIssue437(t *testing.T) { func TestGitHubIssue437(t *testing.T) {
t.Skipf("unmarshalTOML not implemented") t.Skipf("unmarshalTOML not implemented")
src := ` src := `
[HTTP] [HTTP]
+8 -7
@@ -3,17 +3,18 @@ package testsuite
 import (
 	"fmt"
 	"math"
+	"strconv"
 	"time"
 	"github.com/pelletier/go-toml/v2"
 )
 // addTag adds JSON tags to a data structure as expected by toml-test.
-func addTag(key string, tomlData interface{}) interface{} {
+func addTag(tomlData interface{}) interface{} {
 	// Switch on the data type.
 	switch orig := tomlData.(type) {
 	default:
-		//return map[string]interface{}{}
+		// return map[string]interface{}{}
 		panic(fmt.Sprintf("Unknown type: %T", tomlData))
 	// A table: we don't need to add any tags, just recurse for every table
@@ -21,7 +22,7 @@ func addTag(key string, tomlData interface{}) interface{} {
 	case map[string]interface{}:
 		typed := make(map[string]interface{}, len(orig))
 		for k, v := range orig {
-			typed[k] = addTag(k, v)
+			typed[k] = addTag(v)
 		}
 		return typed
@@ -30,13 +31,13 @@ func addTag(key string, tomlData interface{}) interface{} {
 	case []map[string]interface{}:
 		typed := make([]map[string]interface{}, len(orig))
 		for i, v := range orig {
-			typed[i] = addTag("", v).(map[string]interface{})
+			typed[i] = addTag(v).(map[string]interface{})
 		}
 		return typed
 	case []interface{}:
 		typed := make([]interface{}, len(orig))
 		for i, v := range orig {
-			typed[i] = addTag("", v)
+			typed[i] = addTag(v)
 		}
 		return typed
@@ -52,11 +53,11 @@ func addTag(key string, tomlData interface{}) interface{} {
 	// Tag primitive values: bool, string, int, and float64.
 	case bool:
-		return tag("bool", fmt.Sprintf("%v", orig))
+		return tag("bool", strconv.FormatBool(orig))
 	case string:
 		return tag("string", orig)
 	case int64:
-		return tag("integer", fmt.Sprintf("%d", orig))
+		return tag("integer", strconv.FormatInt(orig, 10))
 	case float64:
 		// Special case for nan since NaN == NaN is false.
 		if math.IsNaN(orig) {
+10 -10
@@ -9,6 +9,7 @@ import (
 )
 func CmpJSON(t *testing.T, key string, want, have interface{}) {
+	t.Helper()
 	switch w := want.(type) {
 	case map[string]interface{}:
 		cmpJSONMaps(t, key, w, have)
@@ -22,6 +23,7 @@ func CmpJSON(t *testing.T, key string, want, have interface{}) {
 }
 func cmpJSONMaps(t *testing.T, key string, want map[string]interface{}, have interface{}) {
+	t.Helper()
 	haveMap, ok := have.(map[string]interface{})
 	if !ok {
 		mismatch(t, key, "table", want, haveMap)
@@ -61,6 +63,7 @@ func cmpJSONMaps(t *testing.T, key string, want map[string]interface{}, have int
 }
 func cmpJSONArrays(t *testing.T, key string, want, have interface{}) {
+	t.Helper()
 	wantSlice, ok := want.([]interface{})
 	if !ok {
 		panic(fmt.Sprintf("'value' should be a JSON array when 'type=array', but it is a %T", want))
@@ -83,6 +86,7 @@ func cmpJSONArrays(t *testing.T, key string, want, have interface{}) {
 }
 func cmpJSONValues(t *testing.T, key string, want, have map[string]interface{}) {
+	t.Helper()
 	wantType, ok := want["type"].(string)
 	if !ok {
 		panic(fmt.Sprintf("'type' should be a string, but it is a %T", want["type"]))
@@ -126,6 +130,7 @@ func cmpJSONValues(t *testing.T, key string, want, have map[string]interface{})
 }
 func cmpAsStrings(t *testing.T, key string, want, have string) {
+	t.Helper()
 	if want != have {
 		t.Fatalf("Values for key '%s' don't match:\n"+
 			" Expected: %s\n"+
@@ -135,6 +140,7 @@ func cmpAsStrings(t *testing.T, key string, want, have string) {
 }
 func cmpFloats(t *testing.T, key string, want, have string) {
+	t.Helper()
 	// Special case for NaN, since NaN != NaN.
 	if strings.HasSuffix(want, "nan") || strings.HasSuffix(have, "nan") {
 		if want != have {
@@ -177,6 +183,7 @@ var layouts = map[string]string{
 }
 func cmpAsDatetimes(t *testing.T, key string, kind, want, have string) {
+	t.Helper()
 	layout, ok := layouts[kind]
 	if !ok {
 		panic("should never happen")
@@ -200,15 +207,6 @@ func cmpAsDatetimes(t *testing.T, key string, kind, want, have string) {
 	}
 }
-func cmpAsDatetimesLocal(t *testing.T, key string, want, have string) {
-	if datetimeRepl.Replace(want) != datetimeRepl.Replace(have) {
-		t.Fatalf("Values for key '%s' don't match:\n"+
-			" Expected: %v\n"+
-			" Your encoder: %v",
-			key, want, have)
-	}
-}
 func kjoin(old, key string) string {
 	if len(old) == 0 {
 		return key
@@ -230,6 +228,7 @@ func isValue(m map[string]interface{}) bool {
 }
 func mismatch(t *testing.T, key string, wantType string, want, have interface{}) {
+	t.Helper()
 	t.Fatalf("Key '%s' is not an %s but %[4]T:\n"+
 		" Expected: %#[3]v\n"+
 		" Your encoder: %#[4]v",
@@ -237,8 +236,9 @@ func mismatch(t *testing.T, key string, wantType string, want, have interface{})
 }
 func valMismatch(t *testing.T, key string, wantType, haveType string, want, have interface{}) {
+	t.Helper()
 	t.Fatalf("Key '%s' is not an %s but %s:\n"+
 		" Expected: %#[3]v\n"+
 		" Your encoder: %#[4]v",
-		key, wantType, want, have)
+		key, wantType, haveType, want, have)
 }
-69
@@ -1,69 +0,0 @@
package testsuite
import (
"bytes"
"encoding/json"
"fmt"
"github.com/pelletier/go-toml/v2"
)
type parser struct{}
func (p parser) Decode(input string) (output string, outputIsError bool, retErr error) {
defer func() {
if r := recover(); r != nil {
switch rr := r.(type) {
case error:
retErr = rr
default:
retErr = fmt.Errorf("%s", rr)
}
}
}()
var v interface{}
if err := toml.Unmarshal([]byte(input), &v); err != nil {
return err.Error(), true, nil
}
j, err := json.MarshalIndent(addTag("", v), "", " ")
if err != nil {
return "", false, retErr
}
return string(j), false, retErr
}
func (p parser) Encode(input string) (output string, outputIsError bool, retErr error) {
defer func() {
if r := recover(); r != nil {
switch rr := r.(type) {
case error:
retErr = rr
default:
retErr = fmt.Errorf("%s", rr)
}
}
}()
var tmp interface{}
err := json.Unmarshal([]byte(input), &tmp)
if err != nil {
return "", false, err
}
rm, err := rmTag(tmp)
if err != nil {
return err.Error(), true, retErr
}
buf := new(bytes.Buffer)
err = toml.NewEncoder(buf).Encode(rm)
if err != nil {
return err.Error(), true, retErr
}
return buf.String(), false, retErr
}
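The deleted wrapper above converted decoder panics into ordinary returned errors with a `defer`/`recover` around the call. The pattern in isolation (the `panic("boom")` is a stand-in for a decoder panic):

```go
package main

import "fmt"

// decode converts any panic raised inside it into a returned error,
// mirroring the removed parser.Decode/Encode wrappers.
func decode() (err error) {
	defer func() {
		if r := recover(); r != nil {
			switch rr := r.(type) {
			case error:
				err = rr
			default:
				err = fmt.Errorf("%s", rr)
			}
		}
	}()
	panic("boom") // stand-in for a decoder panic
}

func main() {
	fmt.Println(decode()) // boom
}
```

The named return value `err` is essential here: the deferred closure can only hand the recovered value back to the caller by assigning to it.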
+27 -21
@@ -4,10 +4,12 @@ import (
 	"fmt"
 	"strconv"
 	"time"
+	"github.com/pelletier/go-toml/v2"
 )
 // Remove JSON tags to a data structure as returned by toml-test.
-func rmTag(typedJson interface{}) (interface{}, error) {
+func rmTag(typedJSON interface{}) (interface{}, error) {
 	// Check if key is in the table m.
 	in := func(key string, m map[string]interface{}) bool {
 		_, ok := m[key]
@@ -15,8 +17,7 @@ func rmTag(typedJson interface{}) (interface{}, error) {
 	}
 	// Switch on the data type.
-	switch v := typedJson.(type) {
+	switch v := typedJSON.(type) {
 	// Object: this can either be a TOML table or a primitive with tags.
 	case map[string]interface{}:
 		// This value represents a primitive: remove the tags and return just
@@ -40,7 +41,7 @@ func rmTag(typedJson interface{}) (interface{}, error) {
 		}
 		return m, nil
-	// Array: remove tags from all itenm.
+	// Array: remove tags from all items.
 	case []interface{}:
 		a := make([]interface{}, len(v))
 		for i := range v {
@@ -54,7 +55,7 @@ func rmTag(typedJson interface{}) (interface{}, error) {
 	}
 	// The top level must be an object or array.
-	return nil, fmt.Errorf("unrecognized JSON format '%T'", typedJson)
+	return nil, fmt.Errorf("unrecognized JSON format '%T'", typedJSON)
 }
 // Return a primitive: read the "type" and convert the "value" to that.
@@ -76,14 +77,31 @@ func untag(typed map[string]interface{}) (interface{}, error) {
 			return nil, fmt.Errorf("untag: %w", err)
 		}
 		return f, nil
-	// toml.LocalDate{Year:2020, Month:12, Day:12}
 	case "datetime":
-		return parseTime(v, "2006-01-02T15:04:05.999999999Z07:00", false)
+		return time.Parse("2006-01-02T15:04:05.999999999Z07:00", v)
 	case "datetime-local":
-		return parseTime(v, "2006-01-02T15:04:05.999999999", true)
+		var t toml.LocalDateTime
+		err := t.UnmarshalText([]byte(v))
+		if err != nil {
+			return nil, fmt.Errorf("untag: %w", err)
+		}
+		return t, nil
 	case "date-local":
-		return parseTime(v, "2006-01-02", true)
+		var t toml.LocalDate
+		err := t.UnmarshalText([]byte(v))
+		if err != nil {
+			return nil, fmt.Errorf("untag: %w", err)
+		}
+		return t, nil
 	case "time-local":
-		return parseTime(v, "15:04:05.999999999", true)
+		var t toml.LocalTime
+		err := t.UnmarshalText([]byte(v))
+		if err != nil {
+			return nil, fmt.Errorf("untag: %w", err)
+		}
+		return t, nil
 	case "bool":
 		switch v {
 		case "true":
@@ -96,15 +114,3 @@ func untag(typed map[string]interface{}) (interface{}, error) {
 	return nil, fmt.Errorf("untag: unrecognized tag type %q", t)
 }
-func parseTime(v, format string, local bool) (t time.Time, err error) {
-	if local {
-		t, err = time.ParseInLocation(format, v, time.Local)
-	} else {
-		t, err = time.Parse(format, v)
-	}
-	if err != nil {
-		return time.Time{}, fmt.Errorf("Could not parse %q as a datetime: %w", v, err)
-	}
-	return t, nil
-}
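The rewritten cases above stop funneling every local value through `time.Parse` and instead let each type parse its own text form via `encoding.TextUnmarshaler` (`UnmarshalText`). The shape of that interface, sketched with a hypothetical `localDate` type standing in for the real `toml.LocalDate`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// localDate is a hypothetical stand-in for toml.LocalDate: a calendar
// date with no time zone, parsed from its own "YYYY-MM-DD" text form.
type localDate struct{ Year, Month, Day int }

// UnmarshalText implements encoding.TextUnmarshaler.
func (d *localDate) UnmarshalText(b []byte) error {
	parts := strings.Split(string(b), "-")
	if len(parts) != 3 {
		return fmt.Errorf("invalid date %q", b)
	}
	var err error
	if d.Year, err = strconv.Atoi(parts[0]); err != nil {
		return err
	}
	if d.Month, err = strconv.Atoi(parts[1]); err != nil {
		return err
	}
	d.Day, err = strconv.Atoi(parts[2])
	return err
}

func main() {
	var d localDate
	if err := d.UnmarshalText([]byte("2020-12-12")); err != nil {
		panic(err)
	}
	fmt.Println(d.Year, d.Month, d.Day) // 2020 12 12
}
```

Delegating to the type's own `UnmarshalText` keeps the distinction between local and offset datetimes in the type system, instead of collapsing everything to `time.Time` as the deleted `parseTime` helper did.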
+23 -5
@@ -10,7 +10,7 @@ import (
 	"github.com/pelletier/go-toml/v2"
 )
-// Marshal is a helpfer function for calling toml.Marshal
+// Marshal is a helper function for calling toml.Marshal
 //
 // Only needed to avoid package import loops.
 func Marshal(v interface{}) ([]byte, error) {
@@ -27,7 +27,7 @@ func Unmarshal(data []byte, v interface{}) error {
 // ValueToTaggedJSON takes a data structure and returns the tagged JSON
 // representation.
 func ValueToTaggedJSON(doc interface{}) ([]byte, error) {
-	return json.MarshalIndent(addTag("", doc), "", " ")
+	return json.MarshalIndent(addTag(doc), "", " ")
 }
 // DecodeStdin is a helper function for the toml-test binary interface. TOML input
@@ -37,14 +37,32 @@ func DecodeStdin() error {
 	var decoded map[string]interface{}
 	if err := toml.NewDecoder(os.Stdin).Decode(&decoded); err != nil {
-		return fmt.Errorf("Error decoding TOML: %s", err)
+		return fmt.Errorf("error decoding TOML: %w", err)
 	}
 	j := json.NewEncoder(os.Stdout)
 	j.SetIndent("", " ")
-	if err := j.Encode(addTag("", decoded)); err != nil {
-		return fmt.Errorf("Error encoding JSON: %s", err)
+	if err := j.Encode(addTag(decoded)); err != nil {
+		return fmt.Errorf("error encoding JSON: %w", err)
 	}
 	return nil
 }
+// EncodeStdin is a helper function for the toml-test binary interface. Tagged
+// JSON is read from STDIN and a resulting TOML representation is written to
+// STDOUT.
+func EncodeStdin() error {
+	var j interface{}
+	err := json.NewDecoder(os.Stdin).Decode(&j)
+	if err != nil {
+		return err
+	}
+	rm, err := rmTag(j)
+	if err != nil {
+		return fmt.Errorf("removing tags: %w", err)
+	}
+	return toml.NewEncoder(os.Stdout).Encode(rm)
+}
+1 -1
@@ -36,7 +36,7 @@ func (t *KeyTracker) Pop(node *unstable.Node) {
 	}
 }
-// Key returns the current key
+// Key returns the current key.
 func (t *KeyTracker) Key() []string {
 	k := make([]string, len(t.k))
 	copy(k, t.k)
+44 -41
@@ -57,7 +57,11 @@ type SeenTracker struct {
 	currentIdx int
 }
-var pool sync.Pool
+var pool = sync.Pool{
+	New: func() interface{} {
+		return &SeenTracker{}
+	},
+}
 func (s *SeenTracker) reset() {
 	// Always contains a root element at index 0.
@@ -149,8 +153,9 @@ func (s *SeenTracker) setExplicitFlag(parentIdx int) {
 // CheckExpression takes a top-level node and checks that it does not contain
 // keys that have been seen in previous calls, and validates that types are
-// consistent.
-func (s *SeenTracker) CheckExpression(node *unstable.Node) error {
+// consistent. It returns true if it is the first time this node's key is seen.
+// Useful to clear array tables on first use.
+func (s *SeenTracker) CheckExpression(node *unstable.Node) (bool, error) {
 	if s.entries == nil {
 		s.reset()
 	}
@@ -166,7 +171,7 @@ func (s *SeenTracker) CheckExpression(node *unstable.Node) error {
 	}
 }
-func (s *SeenTracker) checkTable(node *unstable.Node) error {
+func (s *SeenTracker) checkTable(node *unstable.Node) (bool, error) {
 	if s.currentIdx >= 0 {
 		s.setExplicitFlag(s.currentIdx)
 	}
@@ -192,7 +197,7 @@ func (s *SeenTracker) checkTable(node *unstable.Node) error {
 		} else {
 			entry := s.entries[idx]
 			if entry.kind == valueKind {
-				return fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
+				return false, fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
 			}
 		}
 		parentIdx = idx
@@ -201,25 +206,27 @@ func (s *SeenTracker) checkTable(node *unstable.Node) error {
 	k := it.Node().Data
 	idx := s.find(parentIdx, k)
+	first := false
 	if idx >= 0 {
 		kind := s.entries[idx].kind
 		if kind != tableKind {
-			return fmt.Errorf("toml: key %s should be a table, not a %s", string(k), kind)
+			return false, fmt.Errorf("toml: key %s should be a table, not a %s", string(k), kind)
 		}
 		if s.entries[idx].explicit {
-			return fmt.Errorf("toml: table %s already exists", string(k))
+			return false, fmt.Errorf("toml: table %s already exists", string(k))
 		}
 		s.entries[idx].explicit = true
 	} else {
 		idx = s.create(parentIdx, k, tableKind, true, false)
+		first = true
 	}
 	s.currentIdx = idx
-	return nil
+	return first, nil
 }
-func (s *SeenTracker) checkArrayTable(node *unstable.Node) error {
+func (s *SeenTracker) checkArrayTable(node *unstable.Node) (bool, error) {
 	if s.currentIdx >= 0 {
 		s.setExplicitFlag(s.currentIdx)
 	}
@@ -242,7 +249,7 @@ func (s *SeenTracker) checkArrayTable(node *unstable.Node) error {
 		} else {
 			entry := s.entries[idx]
 			if entry.kind == valueKind {
-				return fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
+				return false, fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
 			}
 		}
@@ -252,22 +259,23 @@ func (s *SeenTracker) checkArrayTable(node *unstable.Node) error {
 	k := it.Node().Data
 	idx := s.find(parentIdx, k)
-	if idx >= 0 {
+	firstTime := idx < 0
+	if firstTime {
+		idx = s.create(parentIdx, k, arrayTableKind, true, false)
+	} else {
 		kind := s.entries[idx].kind
 		if kind != arrayTableKind {
-			return fmt.Errorf("toml: key %s already exists as a %s, but should be an array table", kind, string(k))
+			return false, fmt.Errorf("toml: key %s already exists as a %s, but should be an array table", kind, string(k))
 		}
 		s.clear(idx)
-	} else {
-		idx = s.create(parentIdx, k, arrayTableKind, true, false)
 	}
 	s.currentIdx = idx
-	return nil
+	return firstTime, nil
 }
-func (s *SeenTracker) checkKeyValue(node *unstable.Node) error {
+func (s *SeenTracker) checkKeyValue(node *unstable.Node) (bool, error) {
 	parentIdx := s.currentIdx
 	it := node.Key()
@@ -280,12 +288,13 @@ func (s *SeenTracker) checkKeyValue(node *unstable.Node) error {
 		idx = s.create(parentIdx, k, tableKind, false, true)
 	} else {
 		entry := s.entries[idx]
if it.IsLast() { switch {
return fmt.Errorf("toml: key %s is already defined", string(k)) case it.IsLast():
} else if entry.kind != tableKind { return false, fmt.Errorf("toml: key %s is already defined", string(k))
return fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind) case entry.kind != tableKind:
} else if entry.explicit { return false, fmt.Errorf("toml: expected %s to be a table, not a %s", string(k), entry.kind)
return fmt.Errorf("toml: cannot redefine table %s that has already been explicitly defined", string(k)) case entry.explicit:
return false, fmt.Errorf("toml: cannot redefine table %s that has already been explicitly defined", string(k))
} }
} }
@@ -301,47 +310,41 @@ func (s *SeenTracker) checkKeyValue(node *unstable.Node) error {
return s.checkInlineTable(value) return s.checkInlineTable(value)
case unstable.Array: case unstable.Array:
return s.checkArray(value) return s.checkArray(value)
default:
return false, nil
} }
return nil
} }
func (s *SeenTracker) checkArray(node *unstable.Node) error { func (s *SeenTracker) checkArray(node *unstable.Node) (first bool, err error) {
it := node.Children() it := node.Children()
for it.Next() { for it.Next() {
n := it.Node() n := it.Node()
switch n.Kind { switch n.Kind { //nolint:exhaustive
case unstable.InlineTable: case unstable.InlineTable:
err := s.checkInlineTable(n) first, err = s.checkInlineTable(n)
if err != nil { if err != nil {
return err return false, err
} }
case unstable.Array: case unstable.Array:
err := s.checkArray(n) first, err = s.checkArray(n)
if err != nil { if err != nil {
return err return false, err
} }
} }
} }
return nil return first, nil
} }
func (s *SeenTracker) checkInlineTable(node *unstable.Node) error { func (s *SeenTracker) checkInlineTable(node *unstable.Node) (first bool, err error) {
if pool.New == nil {
pool.New = func() interface{} {
return &SeenTracker{}
}
}
s = pool.Get().(*SeenTracker) s = pool.Get().(*SeenTracker)
s.reset() s.reset()
it := node.Children() it := node.Children()
for it.Next() { for it.Next() {
n := it.Node() n := it.Node()
err := s.checkKeyValue(n) first, err = s.checkKeyValue(n)
if err != nil { if err != nil {
return err return false, err
} }
} }
@@ -352,5 +355,5 @@ func (s *SeenTracker) checkInlineTable(node *unstable.Node) error {
// redefinition of its keys: check* functions cannot walk into // redefinition of its keys: check* functions cannot walk into
// a value. // a value.
pool.Put(s) pool.Put(s)
return nil return first, nil
} }
+8 -3
@@ -1,10 +1,10 @@
 package tracker

 import (
+	"reflect"
 	"testing"
-	"unsafe"

-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )

 func TestEntrySize(t *testing.T) {
@@ -12,5 +12,10 @@ func TestEntrySize(t *testing.T) {
 	// performance of unmarshaling documents. Should only be increased with care
 	// and a very good reason.
 	maxExpectedEntrySize := 48
-	require.LessOrEqual(t, int(unsafe.Sizeof(entry{})), maxExpectedEntrySize)
+	entrySize := int(reflect.TypeOf(entry{}).Size())
+	assert.True(t,
+		entrySize <= maxExpectedEntrySize,
+		"Expected entry to be less than or equal to %d, got: %d",
+		maxExpectedEntrySize, entrySize,
+	)
 }
+1
@@ -1 +1,2 @@
+// Package tracker provides functions for keeping track of AST nodes.
 package tracker
+1 -1
@@ -45,7 +45,7 @@ func (d *LocalDate) UnmarshalText(b []byte) error {
 type LocalTime struct {
 	Hour       int // Hour of the day: [0; 24[
 	Minute     int // Minute of the hour: [0; 60[
-	Second     int // Second of the minute: [0; 60[
+	Second     int // Second of the minute: [0; 59]
 	Nanosecond int // Nanoseconds within the second: [0, 1000000000[
 	Precision  int // Number of digits to display for Nanosecond.
 }
+28 -28
@@ -5,73 +5,73 @@ import (
 	"time"

 	"github.com/pelletier/go-toml/v2"
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )

 func TestLocalDate_AsTime(t *testing.T) {
 	d := toml.LocalDate{2021, 6, 8}
 	cast := d.AsTime(time.UTC)
-	require.Equal(t, time.Date(2021, time.June, 8, 0, 0, 0, 0, time.UTC), cast)
+	assert.Equal(t, time.Date(2021, time.June, 8, 0, 0, 0, 0, time.UTC), cast)
 }

 func TestLocalDate_String(t *testing.T) {
 	d := toml.LocalDate{2021, 6, 8}
-	require.Equal(t, "2021-06-08", d.String())
+	assert.Equal(t, "2021-06-08", d.String())
 }

 func TestLocalDate_MarshalText(t *testing.T) {
 	d := toml.LocalDate{2021, 6, 8}
 	b, err := d.MarshalText()
-	require.NoError(t, err)
-	require.Equal(t, []byte("2021-06-08"), b)
+	assert.NoError(t, err)
+	assert.Equal(t, []byte("2021-06-08"), b)
 }

 func TestLocalDate_UnmarshalMarshalText(t *testing.T) {
 	d := toml.LocalDate{}
 	err := d.UnmarshalText([]byte("2021-06-08"))
-	require.NoError(t, err)
-	require.Equal(t, toml.LocalDate{2021, 6, 8}, d)
+	assert.NoError(t, err)
+	assert.Equal(t, toml.LocalDate{2021, 6, 8}, d)

 	err = d.UnmarshalText([]byte("what"))
-	require.Error(t, err)
+	assert.Error(t, err)
 }

 func TestLocalTime_String(t *testing.T) {
 	d := toml.LocalTime{20, 12, 1, 2, 9}
-	require.Equal(t, "20:12:01.000000002", d.String())
+	assert.Equal(t, "20:12:01.000000002", d.String())
 	d = toml.LocalTime{20, 12, 1, 0, 0}
-	require.Equal(t, "20:12:01", d.String())
+	assert.Equal(t, "20:12:01", d.String())
 	d = toml.LocalTime{20, 12, 1, 0, 9}
-	require.Equal(t, "20:12:01.000000000", d.String())
+	assert.Equal(t, "20:12:01.000000000", d.String())
 	d = toml.LocalTime{20, 12, 1, 100, 0}
-	require.Equal(t, "20:12:01.0000001", d.String())
+	assert.Equal(t, "20:12:01.0000001", d.String())
 }

 func TestLocalTime_MarshalText(t *testing.T) {
 	d := toml.LocalTime{20, 12, 1, 2, 9}
 	b, err := d.MarshalText()
-	require.NoError(t, err)
-	require.Equal(t, []byte("20:12:01.000000002"), b)
+	assert.NoError(t, err)
+	assert.Equal(t, []byte("20:12:01.000000002"), b)
 }

 func TestLocalTime_UnmarshalMarshalText(t *testing.T) {
 	d := toml.LocalTime{}
 	err := d.UnmarshalText([]byte("20:12:01.000000002"))
-	require.NoError(t, err)
-	require.Equal(t, toml.LocalTime{20, 12, 1, 2, 9}, d)
+	assert.NoError(t, err)
+	assert.Equal(t, toml.LocalTime{20, 12, 1, 2, 9}, d)

 	err = d.UnmarshalText([]byte("what"))
-	require.Error(t, err)
+	assert.Error(t, err)

 	err = d.UnmarshalText([]byte("20:12:01.000000002 bad"))
-	require.Error(t, err)
+	assert.Error(t, err)
 }

 func TestLocalTime_RoundTrip(t *testing.T) {
 	var d struct{ A toml.LocalTime }
 	err := toml.Unmarshal([]byte("a=20:12:01.500"), &d)
-	require.NoError(t, err)
-	require.Equal(t, "20:12:01.500", d.A.String())
+	assert.NoError(t, err)
+	assert.Equal(t, "20:12:01.500", d.A.String())
 }

 func TestLocalDateTime_AsTime(t *testing.T) {
@@ -80,7 +80,7 @@ func TestLocalDateTime_AsTime(t *testing.T) {
 		toml.LocalTime{20, 12, 1, 2, 9},
 	}
 	cast := d.AsTime(time.UTC)
-	require.Equal(t, time.Date(2021, time.June, 8, 20, 12, 1, 2, time.UTC), cast)
+	assert.Equal(t, time.Date(2021, time.June, 8, 20, 12, 1, 2, time.UTC), cast)
 }

 func TestLocalDateTime_String(t *testing.T) {
@@ -88,7 +88,7 @@ func TestLocalDateTime_String(t *testing.T) {
 		toml.LocalDate{2021, 6, 8},
 		toml.LocalTime{20, 12, 1, 2, 9},
 	}
-	require.Equal(t, "2021-06-08T20:12:01.000000002", d.String())
+	assert.Equal(t, "2021-06-08T20:12:01.000000002", d.String())
 }

 func TestLocalDateTime_MarshalText(t *testing.T) {
@@ -97,22 +97,22 @@ func TestLocalDateTime_MarshalText(t *testing.T) {
 		toml.LocalTime{20, 12, 1, 2, 9},
 	}
 	b, err := d.MarshalText()
-	require.NoError(t, err)
-	require.Equal(t, []byte("2021-06-08T20:12:01.000000002"), b)
+	assert.NoError(t, err)
+	assert.Equal(t, []byte("2021-06-08T20:12:01.000000002"), b)
 }

 func TestLocalDateTime_UnmarshalMarshalText(t *testing.T) {
 	d := toml.LocalDateTime{}
 	err := d.UnmarshalText([]byte("2021-06-08 20:12:01.000000002"))
-	require.NoError(t, err)
-	require.Equal(t, toml.LocalDateTime{
+	assert.NoError(t, err)
+	assert.Equal(t, toml.LocalDateTime{
 		toml.LocalDate{2021, 6, 8},
 		toml.LocalTime{20, 12, 1, 2, 9},
 	}, d)

 	err = d.UnmarshalText([]byte("what"))
-	require.Error(t, err)
+	assert.Error(t, err)

 	err = d.UnmarshalText([]byte("2021-06-08 20:12:01.000000002 bad"))
-	require.Error(t, err)
+	assert.Error(t, err)
 }
+153 -44
@@ -3,11 +3,13 @@ package toml
 import (
 	"bytes"
 	"encoding"
+	"encoding/json"
+	"errors"
 	"fmt"
 	"io"
 	"math"
 	"reflect"
-	"sort"
+	"slices"
 	"strconv"
 	"strings"
 	"time"
@@ -37,10 +39,11 @@ type Encoder struct {
 	w io.Writer

 	// global settings
 	tablesInline       bool
 	arraysMultiline    bool
 	indentSymbol       string
 	indentTables       bool
+	marshalJSONNumbers bool
 }

 // NewEncoder returns a new Encoder that writes to w.
@@ -87,6 +90,17 @@ func (enc *Encoder) SetIndentTables(indent bool) *Encoder {
 	return enc
 }

+// SetMarshalJSONNumbers forces the encoder to serialize `json.Number` as a
+// float or integer instead of relying on TextMarshaler to emit a string.
+//
+// *Unstable:* This method does not follow the compatibility guarantees of
+// semver. It can be changed or removed without a new major version being
+// issued.
+func (enc *Encoder) SetMarshalJSONNumbers(indent bool) *Encoder {
+	enc.marshalJSONNumbers = indent
+	return enc
+}
 // Encode writes a TOML representation of v to the stream.
 //
 // If v cannot be represented to TOML it returns an error.
@@ -148,6 +162,8 @@ func (enc *Encoder) SetIndentTables(indent bool) *Encoder {
 //
 // The "omitempty" option prevents empty values or groups from being emitted.
 //
+// The "omitzero" option prevents zero values or groups from being emitted.
+//
 // The "commented" option prefixes the value and all its children with a comment
 // symbol.
 //
@@ -164,7 +180,7 @@ func (enc *Encoder) Encode(v interface{}) error {
 	ctx.inline = enc.tablesInline

 	if v == nil {
-		return fmt.Errorf("toml: cannot encode a nil interface")
+		return errors.New("toml: cannot encode a nil interface")
 	}

 	b, err := enc.encode(b, ctx, reflect.ValueOf(v))
@@ -183,6 +199,7 @@ func (enc *Encoder) Encode(v interface{}) error {
 type valueOptions struct {
 	multiline bool
 	omitempty bool
+	omitzero  bool
 	commented bool
 	comment   string
 }
@@ -252,10 +269,21 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
 		return append(b, x.String()...), nil
 	case LocalDateTime:
 		return append(b, x.String()...), nil
+	case json.Number:
+		if enc.marshalJSONNumbers {
+			if x == "" { // Useful zero value.
+				return append(b, "0"...), nil
+			} else if v, err := x.Int64(); err == nil {
+				return enc.encode(b, ctx, reflect.ValueOf(v))
+			} else if f, err := x.Float64(); err == nil {
+				return enc.encode(b, ctx, reflect.ValueOf(f))
+			}
+
+			return nil, fmt.Errorf("toml: unable to convert %q to int64 or float64", x)
+		}
 	}

 	hasTextMarshaler := v.Type().Implements(textMarshalerType)
-	if hasTextMarshaler || (v.CanAddr() && reflect.PtrTo(v.Type()).Implements(textMarshalerType)) {
+	if hasTextMarshaler || (v.CanAddr() && reflect.PointerTo(v.Type()).Implements(textMarshalerType)) {
 		if !hasTextMarshaler {
 			v = v.Addr()
 		}
@@ -284,7 +312,7 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
 		return enc.encodeSlice(b, ctx, v)
 	case reflect.Interface:
 		if v.IsNil() {
-			return nil, fmt.Errorf("toml: encoding a nil interface is not supported")
+			return nil, errors.New("toml: encoding a nil interface is not supported")
 		}

 		return enc.encode(b, ctx, v.Elem())
@@ -301,28 +329,30 @@ func (enc *Encoder) encode(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, e
 	case reflect.Float32:
 		f := v.Float()
-		if math.IsNaN(f) {
+		switch {
+		case math.IsNaN(f):
 			b = append(b, "nan"...)
-		} else if f > math.MaxFloat32 {
+		case f > math.MaxFloat32:
 			b = append(b, "inf"...)
-		} else if f < -math.MaxFloat32 {
+		case f < -math.MaxFloat32:
 			b = append(b, "-inf"...)
-		} else if math.Trunc(f) == f {
+		case math.Trunc(f) == f:
 			b = strconv.AppendFloat(b, f, 'f', 1, 32)
-		} else {
+		default:
 			b = strconv.AppendFloat(b, f, 'f', -1, 32)
 		}
 	case reflect.Float64:
 		f := v.Float()
-		if math.IsNaN(f) {
+		switch {
+		case math.IsNaN(f):
 			b = append(b, "nan"...)
-		} else if f > math.MaxFloat64 {
+		case f > math.MaxFloat64:
 			b = append(b, "inf"...)
-		} else if f < -math.MaxFloat64 {
+		case f < -math.MaxFloat64:
 			b = append(b, "-inf"...)
-		} else if math.Trunc(f) == f {
+		case math.Trunc(f) == f:
 			b = strconv.AppendFloat(b, f, 'f', 1, 64)
-		} else {
+		default:
 			b = strconv.AppendFloat(b, f, 'f', -1, 64)
 		}
 	case reflect.Bool:
@@ -359,6 +389,31 @@ func shouldOmitEmpty(options valueOptions, v reflect.Value) bool {
 	return options.omitempty && isEmptyValue(v)
 }

+func shouldOmitZero(options valueOptions, v reflect.Value) bool {
+	if !options.omitzero {
+		return false
+	}
+
+	// Check if the type implements isZeroer interface (has a custom IsZero method).
+	if v.Type().Implements(isZeroerType) {
+		return v.Interface().(isZeroer).IsZero()
+	}
+
+	// Check if pointer type implements isZeroer.
+	if reflect.PointerTo(v.Type()).Implements(isZeroerType) {
+		if v.CanAddr() {
+			return v.Addr().Interface().(isZeroer).IsZero()
+		}
+		// Create a temporary addressable copy to call the pointer receiver method.
+		pv := reflect.New(v.Type())
+		pv.Elem().Set(v)
+		return pv.Interface().(isZeroer).IsZero()
+	}
+
+	// Fall back to reflect's IsZero for types without custom IsZero method.
+	return v.IsZero()
+}
 func (enc *Encoder) encodeKv(b []byte, ctx encoderCtx, options valueOptions, v reflect.Value) ([]byte, error) {
 	var err error
@@ -409,8 +464,9 @@ func isEmptyValue(v reflect.Value) bool {
 		return v.Float() == 0
 	case reflect.Interface, reflect.Ptr:
 		return v.IsNil()
+	default:
+		return false
 	}
-	return false
 }

 func isEmptyStruct(v reflect.Value) bool {
@@ -454,7 +510,7 @@ func (enc *Encoder) encodeString(b []byte, v string, options valueOptions) []byt
 func needsQuoting(v string) bool {
 	// TODO: vectorize
 	for _, b := range []byte(v) {
-		if b == '\'' || b == '\r' || b == '\n' || characters.InvalidAscii(b) {
+		if b == '\'' || b == '\r' || b == '\n' || characters.InvalidASCII(b) {
 			return true
 		}
 	}
@@ -492,12 +548,26 @@ func (enc *Encoder) encodeQuotedString(multiline bool, b []byte, v string) []byt
 		del = 0x7f
 	)

-	for _, r := range []byte(v) {
+	bv := []byte(v)
+	for i := 0; i < len(bv); i++ {
+		r := bv[i]
 		switch r {
 		case '\\':
 			b = append(b, `\\`...)
 		case '"':
-			b = append(b, `\"`...)
+			if multiline {
+				// Quotation marks do not need to be escaped in multiline
+				// strings unless three appear consecutively. If 3+ quotes
+				// appear, escape all of them because it's visually better.
+				if i+2 >= len(bv) || bv[i+1] != '"' || bv[i+2] != '"' {
+					b = append(b, r)
+				} else {
+					b = append(b, `\"\"\"`...)
+					i += 2
+				}
+			} else {
+				b = append(b, `\"`...)
+			}
 		case '\b':
 			b = append(b, `\b`...)
 		case '\f':
@@ -534,9 +604,9 @@ func (enc *Encoder) encodeUnquotedKey(b []byte, v string) []byte {
 	return append(b, v...)
 }

-func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) ([]byte, error) {
+func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) []byte {
 	if len(ctx.parentKey) == 0 {
-		return b, nil
+		return b
 	}

 	b = enc.encodeComment(ctx.indent, ctx.options.comment, b)
@@ -556,10 +626,9 @@ func (enc *Encoder) encodeTableHeader(ctx encoderCtx, b []byte) ([]byte, error)
 	b = append(b, "]\n"...)

-	return b, nil
+	return b
 }

-//nolint:cyclop
 func (enc *Encoder) encodeKey(b []byte, k string) []byte {
 	needsQuotation := false
 	cannotUseLiteral := false
@@ -596,18 +665,33 @@
 func (enc *Encoder) keyToString(k reflect.Value) (string, error) {
 	keyType := k.Type()
-	switch {
-	case keyType.Kind() == reflect.String:
-		return k.String(), nil
-	case keyType.Implements(textMarshalerType):
+	if keyType.Implements(textMarshalerType) {
 		keyB, err := k.Interface().(encoding.TextMarshaler).MarshalText()
 		if err != nil {
 			return "", fmt.Errorf("toml: error marshalling key %v from text: %w", k, err)
 		}
 		return string(keyB), nil
 	}
-	return "", fmt.Errorf("toml: type %s is not supported as a map key", keyType.Kind())
+
+	switch keyType.Kind() {
+	case reflect.String:
+		return k.String(), nil
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return strconv.FormatInt(k.Int(), 10), nil
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		return strconv.FormatUint(k.Uint(), 10), nil
+	case reflect.Float32:
+		return strconv.FormatFloat(k.Float(), 'f', -1, 32), nil
+	case reflect.Float64:
+		return strconv.FormatFloat(k.Float(), 'f', -1, 64), nil
+	default:
+		return "", fmt.Errorf("toml: type %s is not supported as a map key", keyType.Kind())
+	}
 }
 func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte, error) {
@@ -621,7 +705,14 @@ func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte
 		v := iter.Value()

 		if isNil(v) {
-			continue
+			// For nil pointers, convert to zero value of the element type.
+			// This allows round-trip marshaling of maps with nil pointer values.
+			// For nil interfaces and nil maps, skip since we can't derive a type.
+			if v.Kind() == reflect.Ptr {
+				v = reflect.Zero(v.Type().Elem())
+			} else {
+				continue
+			}
 		}

 		k, err := enc.keyToString(iter.Key())
@@ -643,8 +734,8 @@ func (enc *Encoder) encodeMap(b []byte, ctx encoderCtx, v reflect.Value) ([]byte
 }

 func sortEntriesByKey(e []entry) {
-	sort.Slice(e, func(i, j int) bool {
-		return e[i].Key < e[j].Key
+	slices.SortFunc(e, func(a, b entry) int {
+		return strings.Compare(a.Key, b.Key)
 	})
 }

@@ -707,11 +798,12 @@ func walkStruct(ctx encoderCtx, t *table, v reflect.Value) {
 		if fieldType.Anonymous {
 			if fieldType.Type.Kind() == reflect.Struct {
 				walkStruct(ctx, t, f)
+			} else if fieldType.Type.Kind() == reflect.Ptr && !f.IsNil() && f.Elem().Kind() == reflect.Struct {
+				walkStruct(ctx, t, f.Elem())
 			}
 			continue
-		} else {
-			k = fieldType.Name
 		}
+
+		k = fieldType.Name
 	}

 	if isNil(f) {
@@ -721,6 +813,7 @@ func walkStruct(ctx encoderCtx, t *table, v reflect.Value) {
 	options := valueOptions{
 		multiline: opts.multiline,
 		omitempty: opts.omitempty,
+		omitzero:  opts.omitzero,
 		commented: opts.commented,
 		comment:   fieldType.Tag.Get("comment"),
 	}
@@ -781,6 +874,7 @@ type tagOptions struct {
 	multiline bool
 	inline    bool
 	omitempty bool
+	omitzero  bool
 	commented bool
 }
@@ -793,7 +887,7 @@ func parseTag(tag string) (string, tagOptions) {
 	}
 	raw := tag[idx+1:]

-	tag = string(tag[:idx])
+	tag = tag[:idx]
 	for raw != "" {
 		var o string
 		i := strings.Index(raw, ",")
@@ -809,6 +903,8 @@ func parseTag(tag string) (string, tagOptions) {
 			opts.inline = true
 		case "omitempty":
 			opts.omitempty = true
+		case "omitzero":
+			opts.omitzero = true
 		case "commented":
 			opts.commented = true
 		}
@@ -827,10 +923,7 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
 	}

 	if !ctx.skipTableHeader {
-		b, err = enc.encodeTableHeader(ctx, b)
-		if err != nil {
-			return nil, err
-		}
+		b = enc.encodeTableHeader(ctx, b)

 		if enc.indentTables && len(ctx.parentKey) > 0 {
 			ctx.indent++
@@ -843,6 +936,9 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
 		if shouldOmitEmpty(kv.Options, kv.Value) {
 			continue
 		}
+		if shouldOmitZero(kv.Options, kv.Value) {
+			continue
+		}

 		hasNonEmptyKV = true
 		ctx.setKey(kv.Key)
@@ -862,6 +958,9 @@ func (enc *Encoder) encodeTable(b []byte, ctx encoderCtx, t table) ([]byte, erro
 		if shouldOmitEmpty(table.Options, table.Value) {
 			continue
 		}
+		if shouldOmitZero(table.Options, table.Value) {
+			continue
+		}

 		if first {
 			first = false
 			if hasNonEmptyKV {
@@ -896,6 +995,9 @@ func (enc *Encoder) encodeTableInline(b []byte, ctx encoderCtx, t table) ([]byte
 		if shouldOmitEmpty(kv.Options, kv.Value) {
 			continue
 		}
+		if shouldOmitZero(kv.Options, kv.Value) {
+			continue
+		}

 		if first {
 			first = false
@@ -924,11 +1026,14 @@ func willConvertToTable(ctx encoderCtx, v reflect.Value) bool {
 	if !v.IsValid() {
 		return false
 	}
-	if v.Type() == timeType || v.Type().Implements(textMarshalerType) || (v.Kind() != reflect.Ptr && v.CanAddr() && reflect.PtrTo(v.Type()).Implements(textMarshalerType)) {
+
+	t := v.Type()
+	if t == timeType || t.Implements(textMarshalerType) {
+		return false
+	}
+	if v.Kind() != reflect.Ptr && v.CanAddr() && reflect.PointerTo(t).Implements(textMarshalerType) {
 		return false
 	}

-	t := v.Type()
 	switch t.Kind() {
 	case reflect.Map, reflect.Struct:
 		return !ctx.inline
@@ -998,6 +1103,10 @@ func (enc *Encoder) encodeSliceAsArrayTable(b []byte, ctx encoderCtx, v reflect.
 	scratch = enc.commented(ctx.commented, scratch)

+	if enc.indentTables {
+		scratch = enc.indent(ctx.indent, scratch)
+	}
+
 	scratch = append(scratch, "[["...)
 	for i, k := range ctx.parentKey {
+697 -84
File diff suppressed because it is too large
+2 -3
@@ -1,6 +1,4 @@
-//go:build go1.18 || go1.19 || go1.20 || go1.21
-// +build go1.18 go1.19 go1.20 go1.21
-
+// Package ossfuzz provides a fuzzing target for OSS-Fuzz.
 package ossfuzz

 import (
@@ -11,6 +9,7 @@ import (
 	"github.com/pelletier/go-toml/v2"
 )

+// FuzzToml is the fuzzing target.
 func FuzzToml(data []byte) int {
 	if len(data) >= 2048 {
 		return 0
+15 -8
@@ -1,7 +1,6 @@
 package toml

 import (
-	"github.com/pelletier/go-toml/v2/internal/danger"
 	"github.com/pelletier/go-toml/v2/internal/tracker"
 	"github.com/pelletier/go-toml/v2/unstable"
 )
@@ -13,6 +12,9 @@ type strict struct {
 	key tracker.KeyTracker

 	missing []unstable.ParserError
+
+	// Reference to the document for computing key ranges.
+	doc []byte
 }

 func (s *strict) EnterTable(node *unstable.Node) {
@@ -53,7 +55,7 @@ func (s *strict) MissingTable(node *unstable.Node) {
 	}

 	s.missing = append(s.missing, unstable.ParserError{
-		Highlight: keyLocation(node),
+		Highlight: s.keyLocation(node),
 		Message:   "missing table",
 		Key:       s.key.Key(),
 	})
@@ -65,7 +67,7 @@ func (s *strict) MissingField(node *unstable.Node) {
 	}

 	s.missing = append(s.missing, unstable.ParserError{
-		Highlight: keyLocation(node),
+		Highlight: s.keyLocation(node),
 		Message:   "missing field",
 		Key:       s.key.Key(),
 	})
@@ -88,7 +90,7 @@ func (s *strict) Error(doc []byte) error {
 	return err
 }

-func keyLocation(node *unstable.Node) []byte {
+func (s *strict) keyLocation(node *unstable.Node) []byte {
 	k := node.Key()

 	hasOne := k.Next()
@@ -96,12 +98,17 @@ func keyLocation(node *unstable.Node) []byte {
 		panic("should not be called with empty key")
 	}

-	start := k.Node().Data
-	end := k.Node().Data
+	// Get the range from the first key to the last key.
+	firstRaw := k.Node().Raw
+	lastRaw := firstRaw
 	for k.Next() {
-		end = k.Node().Data
+		lastRaw = k.Node().Raw
 	}

-	return danger.BytesRange(start, end)
+	// Compute the slice from the document using the ranges.
+	start := firstRaw.Offset
+	end := lastRaw.Offset + lastRaw.Length
+	return s.doc[start:end]
 }
+596
@@ -0,0 +1,596 @@
#!/usr/bin/env bash
set -uo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Go versions to test (1.11 through 1.25)
GO_VERSIONS=(
"1.11"
"1.12"
"1.13"
"1.14"
"1.15"
"1.16"
"1.17"
"1.18"
"1.19"
"1.20"
"1.21"
"1.22"
"1.23"
"1.24"
"1.25"
)
# Default values
PARALLEL=true
VERBOSE=false
OUTPUT_DIR="test-results"
DOCKER_TIMEOUT="10m"
usage() {
    cat << EOF
Usage: $0 [OPTIONS] [GO_VERSIONS...]
Test go-toml across multiple Go versions using Docker containers.
The script reports the lowest continuously supported Go version (the point from which
all newer versions pass) and exits with non-zero status only if either of the two most
recent Go versions fails, indicating immediate attention is needed.
Note: For Go versions < 1.21, the script automatically updates go.mod to match the
target version, but older versions may still fail due to missing standard library
features (e.g., the 'slices' package introduced in Go 1.21).
OPTIONS:
-h, --help Show this help message
-s, --sequential Run tests sequentially instead of in parallel
-v, --verbose Enable verbose output
-o, --output DIR Output directory for test results (default: test-results)
-t, --timeout TIME Docker timeout for each test (default: 10m)
--list List available Go versions and exit
ARGUMENTS:
GO_VERSIONS Specific Go versions to test (default: all supported versions)
Examples: 1.21 1.22 1.23
EXAMPLES:
$0 # Test all Go versions in parallel
$0 --sequential # Test all Go versions sequentially
$0 1.21 1.22 1.23 # Test specific versions
$0 --verbose --output ./results 1.24 1.25 # Verbose output to custom directory
EXIT CODES:
0 Recent Go versions pass (good compatibility)
1 Recent Go versions fail (needs attention) or script error
EOF
}
log() {
echo -e "${BLUE}[$(date +'%H:%M:%S')]${NC} $*" >&2
}
log_success() {
echo -e "${GREEN}[$(date +'%H:%M:%S')] ✓${NC} $*" >&2
}
log_error() {
echo -e "${RED}[$(date +'%H:%M:%S')] ✗${NC} $*" >&2
}
log_warning() {
echo -e "${YELLOW}[$(date +'%H:%M:%S')] ⚠${NC} $*" >&2
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
exit 0
;;
-s|--sequential)
PARALLEL=false
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-o|--output)
OUTPUT_DIR="$2"
shift 2
;;
-t|--timeout)
DOCKER_TIMEOUT="$2"
shift 2
;;
--list)
echo "Available Go versions:"
printf '%s\n' "${GO_VERSIONS[@]}"
exit 0
;;
-*)
echo "Unknown option: $1" >&2
usage
exit 1
;;
*)
# Remaining arguments are Go versions
break
;;
esac
done
# If specific versions provided, use those instead of defaults
if [[ $# -gt 0 ]]; then
GO_VERSIONS=("$@")
fi
# Validate Go versions
for version in "${GO_VERSIONS[@]}"; do
if ! [[ "$version" =~ ^1\.(1[1-9]|2[0-5])$ ]]; then
log_error "Invalid Go version: $version. Supported versions: 1.11-1.25"
exit 1
fi
done
# Check if Docker is available
if ! command -v docker &> /dev/null; then
log_error "Docker is required but not installed or not in PATH"
exit 1
fi
# Check if Docker daemon is running
if ! docker info &> /dev/null; then
log_error "Docker daemon is not running"
exit 1
fi
# Create output directory
mkdir -p "$OUTPUT_DIR"
# Function to test a single Go version
test_go_version() {
local go_version="$1"
local container_name="go-toml-test-${go_version}"
local result_file="${OUTPUT_DIR}/go-${go_version}.txt"
local dockerfile_content
log "Testing Go $go_version..."
# Create a temporary Dockerfile for this version
# For Go versions < 1.21, we need to update go.mod to match the Go version
local needs_go_mod_update=false
if [[ $(echo "$go_version 1.21" | tr ' ' '\n' | sort -V | head -n1) == "$go_version" && "$go_version" != "1.21" ]]; then
needs_go_mod_update=true
fi
dockerfile_content="FROM golang:${go_version}-alpine
# Install git (required for go mod)
RUN apk add --no-cache git
# Set working directory
WORKDIR /app
# Copy source code
COPY . ."
# Add go.mod update step for older Go versions
if [[ "$needs_go_mod_update" == true ]]; then
dockerfile_content="$dockerfile_content
# Update go.mod to match Go version (required for Go < 1.21)
RUN if [ -f go.mod ]; then sed -i 's/^go [0-9]\\+\\.[0-9]\\+\\(\\.[0-9]\\+\\)\\?/go $go_version/' go.mod; fi
# Note: Go versions < 1.21 may fail due to missing standard library packages (e.g., slices)
# This is expected for projects that use Go 1.21+ features"
fi
dockerfile_content="$dockerfile_content
# Run tests
CMD [\"sh\", \"-c\", \"go version && echo '--- Running go test ./... ---' && go test ./...\"]"
# Create temporary directory for this test
local temp_dir
temp_dir=$(mktemp -d)
# Copy source to temp directory (excluding test results and git)
rsync -a --exclude="$OUTPUT_DIR" --exclude=".git" --exclude="*.test" . "$temp_dir/"
# Create Dockerfile in temp directory
echo "$dockerfile_content" > "$temp_dir/Dockerfile"
# Build and run container
local exit_code=0
local output
if $VERBOSE; then
log "Building Docker image for Go $go_version..."
fi
# Capture both stdout and stderr, and the exit code
if output=$(cd "$temp_dir" && timeout "$DOCKER_TIMEOUT" docker build -t "$container_name" . 2>&1 && \
timeout "$DOCKER_TIMEOUT" docker run --rm "$container_name" 2>&1); then
log_success "Go $go_version: PASSED"
echo "PASSED" > "${result_file}.status"
else
exit_code=$?
log_error "Go $go_version: FAILED (exit code: $exit_code)"
echo "FAILED" > "${result_file}.status"
fi
# Save full output
echo "$output" > "$result_file"
# Clean up
docker rmi "$container_name" &> /dev/null || true
rm -rf "$temp_dir"
if $VERBOSE; then
echo "--- Go $go_version output ---"
echo "$output"
echo "--- End Go $go_version output ---"
fi
return $exit_code
}
# Function to run tests in parallel
run_parallel() {
local pids=()
local failed_versions=()
log "Starting parallel tests for ${#GO_VERSIONS[@]} Go versions..."
# Start all tests in background
for version in "${GO_VERSIONS[@]}"; do
test_go_version "$version" &
pids+=($!)
done
# Wait for all tests to complete
for i in "${!pids[@]}"; do
local pid=${pids[$i]}
local version=${GO_VERSIONS[$i]}
if ! wait $pid; then
failed_versions+=("$version")
fi
done
return ${#failed_versions[@]}
}
# Function to run tests sequentially
run_sequential() {
local failed_versions=()
log "Starting sequential tests for ${#GO_VERSIONS[@]} Go versions..."
for version in "${GO_VERSIONS[@]}"; do
if ! test_go_version "$version"; then
failed_versions+=("$version")
fi
done
return ${#failed_versions[@]}
}
# Main execution
main() {
local start_time
start_time=$(date +%s)
log "Starting Go version compatibility tests..."
log "Testing versions: ${GO_VERSIONS[*]}"
log "Output directory: $OUTPUT_DIR"
log "Parallel execution: $PARALLEL"
local failed_count
if $PARALLEL; then
run_parallel
failed_count=$?
else
run_sequential
failed_count=$?
fi
local end_time
end_time=$(date +%s)
local duration=$((end_time - start_time))
# Collect results for display
local passed_versions=()
local failed_versions=()
local unknown_versions=()
local passed_count=0
for version in "${GO_VERSIONS[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
if [[ -f "$status_file" ]]; then
local status
status=$(cat "$status_file")
if [[ "$status" == "PASSED" ]]; then
passed_versions+=("$version")
((passed_count++))
else
failed_versions+=("$version")
fi
else
unknown_versions+=("$version")
fi
done
# Generate summary report
local summary_file="${OUTPUT_DIR}/summary.txt"
{
echo "Go Version Compatibility Test Summary"
echo "====================================="
echo "Date: $(date)"
echo "Duration: ${duration}s"
echo "Parallel: $PARALLEL"
echo ""
echo "Results:"
for version in "${GO_VERSIONS[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
if [[ -f "$status_file" ]]; then
local status
status=$(cat "$status_file")
if [[ "$status" == "PASSED" ]]; then
echo " Go $version: ✓ PASSED"
else
echo " Go $version: ✗ FAILED"
fi
else
echo " Go $version: ? UNKNOWN (no status file)"
fi
done
echo ""
echo "Summary: $passed_count/${#GO_VERSIONS[@]} versions passed"
if [[ $failed_count -gt 0 ]]; then
echo ""
echo "Failed versions details:"
for version in "${failed_versions[@]}"; do
echo ""
echo "--- Go $version (FAILED) ---"
local result_file="${OUTPUT_DIR}/go-${version}.txt"
if [[ -f "$result_file" ]]; then
tail -n 30 "$result_file"
fi
done
fi
} > "$summary_file"
# Find lowest continuous supported version and check recent versions
local lowest_continuous_version=""
local recent_versions_failed=false
# Sort versions to ensure proper order
local sorted_versions=()
for version in "${GO_VERSIONS[@]}"; do
sorted_versions+=("$version")
done
# Sort versions numerically (1.11, 1.12, ..., 1.25)
IFS=$'\n' sorted_versions=($(sort -V <<< "${sorted_versions[*]}"))
# Find lowest continuous supported version (all versions from this point onwards pass)
for version in "${sorted_versions[@]}"; do
local status_file="${OUTPUT_DIR}/go-${version}.txt.status"
local all_subsequent_pass=true
# Check if this version and all subsequent versions pass
local found_current=false
for check_version in "${sorted_versions[@]}"; do
if [[ "$check_version" == "$version" ]]; then
found_current=true
fi
if [[ "$found_current" == true ]]; then
local check_status_file="${OUTPUT_DIR}/go-${check_version}.txt.status"
if [[ -f "$check_status_file" ]]; then
local status
status=$(cat "$check_status_file")
if [[ "$status" != "PASSED" ]]; then
all_subsequent_pass=false
break
fi
else
all_subsequent_pass=false
break
fi
fi
done
if [[ "$all_subsequent_pass" == true ]]; then
lowest_continuous_version="$version"
break
fi
done
# Check if the two most recent versions failed
local num_versions=${#sorted_versions[@]}
if [[ $num_versions -ge 2 ]]; then
local second_recent="${sorted_versions[$((num_versions-2))]}"
local most_recent="${sorted_versions[$((num_versions-1))]}"
local second_recent_status_file="${OUTPUT_DIR}/go-${second_recent}.txt.status"
local most_recent_status_file="${OUTPUT_DIR}/go-${most_recent}.txt.status"
local second_recent_failed=false
local most_recent_failed=false
if [[ -f "$second_recent_status_file" ]]; then
local status
status=$(cat "$second_recent_status_file")
if [[ "$status" != "PASSED" ]]; then
second_recent_failed=true
fi
else
second_recent_failed=true
fi
if [[ -f "$most_recent_status_file" ]]; then
local status
status=$(cat "$most_recent_status_file")
if [[ "$status" != "PASSED" ]]; then
most_recent_failed=true
fi
else
most_recent_failed=true
fi
if [[ "$second_recent_failed" == true || "$most_recent_failed" == true ]]; then
recent_versions_failed=true
fi
elif [[ $num_versions -eq 1 ]]; then
# Only one version tested, check if it's the most recent and failed
local only_version="${sorted_versions[0]}"
local only_status_file="${OUTPUT_DIR}/go-${only_version}.txt.status"
if [[ -f "$only_status_file" ]]; then
local status
status=$(cat "$only_status_file")
if [[ "$status" != "PASSED" ]]; then
recent_versions_failed=true
fi
else
recent_versions_failed=true
fi
fi
# Display summary
echo ""
log "Test completed in ${duration}s"
log "Summary report: $summary_file"
echo ""
echo "========================================"
echo " FINAL RESULTS"
echo "========================================"
echo ""
# Display passed versions
if [[ ${#passed_versions[@]} -gt 0 ]]; then
log_success "PASSED (${#passed_versions[@]}/${#GO_VERSIONS[@]}):"
# Sort passed versions for display
local sorted_passed=()
for version in "${sorted_versions[@]}"; do
for passed_version in "${passed_versions[@]}"; do
if [[ "$version" == "$passed_version" ]]; then
sorted_passed+=("$version")
break
fi
done
done
for version in "${sorted_passed[@]}"; do
echo -e "  ${GREEN}✓${NC} Go $version"
done
echo ""
fi
# Display failed versions
if [[ ${#failed_versions[@]} -gt 0 ]]; then
log_error "FAILED (${#failed_versions[@]}/${#GO_VERSIONS[@]}):"
# Sort failed versions for display
local sorted_failed=()
for version in "${sorted_versions[@]}"; do
for failed_version in "${failed_versions[@]}"; do
if [[ "$version" == "$failed_version" ]]; then
sorted_failed+=("$version")
break
fi
done
done
for version in "${sorted_failed[@]}"; do
echo -e "  ${RED}✗${NC} Go $version"
done
echo ""
# Show failure details
echo "========================================"
echo " FAILURE DETAILS"
echo "========================================"
echo ""
for version in "${sorted_failed[@]}"; do
echo -e "${RED}--- Go $version FAILURE LOGS (last 30 lines) ---${NC}"
local result_file="${OUTPUT_DIR}/go-${version}.txt"
if [[ -f "$result_file" ]]; then
tail -n 30 "$result_file" | sed 's/^/ /'
else
echo " No log file found: $result_file"
fi
echo ""
done
fi
# Display unknown versions
if [[ ${#unknown_versions[@]} -gt 0 ]]; then
log_warning "UNKNOWN (${#unknown_versions[@]}/${#GO_VERSIONS[@]}):"
for version in "${unknown_versions[@]}"; do
echo -e " ${YELLOW}?${NC} Go $version (no status file)"
done
echo ""
fi
echo "========================================"
echo " COMPATIBILITY SUMMARY"
echo "========================================"
echo ""
if [[ -n "$lowest_continuous_version" ]]; then
log_success "Lowest continuous supported version: Go $lowest_continuous_version"
echo " (All versions from Go $lowest_continuous_version onwards pass)"
else
log_error "No continuous version support found"
echo " (No version has all subsequent versions passing)"
fi
echo ""
echo "========================================"
echo "Full detailed logs available in: $OUTPUT_DIR"
echo "========================================"
# Determine exit code based on recent versions
if [[ "$recent_versions_failed" == true ]]; then
log_error "OVERALL RESULT: Recent Go versions failed - this needs attention!"
if [[ -n "$lowest_continuous_version" ]]; then
echo "Note: Continuous support starts from Go $lowest_continuous_version"
fi
exit 1
else
log_success "OVERALL RESULT: Recent Go versions pass - compatibility looks good!"
if [[ -n "$lowest_continuous_version" ]]; then
echo "Continuous support starts from Go $lowest_continuous_version"
fi
exit 0
fi
}
# Trap to clean up on exit
cleanup() {
# Kill any remaining background processes
jobs -p | xargs -r kill 2>/dev/null || true
# Clean up any remaining Docker containers
docker ps -q --filter "name=go-toml-test-" | xargs -r docker stop 2>/dev/null || true
docker images -q --filter "reference=go-toml-test-*" | xargs -r docker rmi 2>/dev/null || true
}
trap cleanup EXIT
# Run main function
main
+10 -8
@@ -1,15 +1,16 @@
-//go:generate go run ./cmd/tomltestgen/main.go -o toml_testgen_test.go
+//go:generate go run github.com/toml-lang/toml-test/cmd/toml-test@v1.6.0 -copy ./tests
+//go:generate go run ./cmd/tomltestgen/main.go -r v1.6.0 -o toml_testgen_test.go
+// This is a support file for toml_testgen_test.go
 package toml_test
 import (
 	"encoding/json"
+	"errors"
 	"testing"
 	"github.com/pelletier/go-toml/v2"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 	"github.com/pelletier/go-toml/v2/internal/testsuite"
-	"github.com/stretchr/testify/require"
 )
 func testgenInvalid(t *testing.T, input string) {
@@ -38,21 +39,22 @@ func testgenValid(t *testing.T, input string, jsonRef string) {
 	err := testsuite.Unmarshal([]byte(input), &doc)
 	if err != nil {
-		if de, ok := err.(*toml.DecodeError); ok {
+		de := &toml.DecodeError{}
+		if errors.As(err, &de) {
 			t.Logf("%s\n%s", err, de)
 		}
 		t.Fatalf("failed parsing toml: %s", err)
 	}
 	j, err := testsuite.ValueToTaggedJSON(doc)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var ref interface{}
 	err = json.Unmarshal([]byte(jsonRef), &ref)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var actual interface{}
-	err = json.Unmarshal([]byte(j), &actual)
-	require.NoError(t, err)
+	err = json.Unmarshal(j, &actual)
+	assert.NoError(t, err)
 	testsuite.CmpJSON(t, "", ref, actual)
 }
+1634 -433
File diff suppressed because it is too large
+15 -6
@@ -6,9 +6,18 @@ import (
 	"time"
 )
-var timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
-var textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
-var textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
-var mapStringInterfaceType = reflect.TypeOf(map[string]interface{}(nil))
-var sliceInterfaceType = reflect.TypeOf([]interface{}(nil))
-var stringType = reflect.TypeOf("")
+// isZeroer is used to check if a type has a custom IsZero method.
+// This allows custom types to define their own zero-value semantics.
+type isZeroer interface {
+	IsZero() bool
+}
+
+var (
+	timeType               = reflect.TypeOf((*time.Time)(nil)).Elem()
+	textMarshalerType      = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
+	textUnmarshalerType    = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
+	isZeroerType           = reflect.TypeOf((*isZeroer)(nil)).Elem()
+	mapStringInterfaceType = reflect.TypeOf(map[string]interface{}(nil))
+	sliceInterfaceType     = reflect.TypeOf([]interface{}(nil))
+	stringType             = reflect.TypeOf("")
+)
+254 -60
@@ -5,14 +5,13 @@ import (
 	"errors"
 	"fmt"
 	"io"
-	"io/ioutil"
 	"math"
 	"reflect"
-	"strconv"
 	"strings"
 	"sync/atomic"
 	"time"
-	"github.com/pelletier/go-toml/v2/internal/danger"
 	"github.com/pelletier/go-toml/v2/internal/tracker"
 	"github.com/pelletier/go-toml/v2/unstable"
 )
@@ -21,10 +20,8 @@ import (
 //
 // It is a shortcut for Decoder.Decode() with the default options.
 func Unmarshal(data []byte, v interface{}) error {
-	p := unstable.Parser{}
-	p.Reset(data)
-	d := decoder{p: &p}
+	d := decoder{}
+	d.p.Reset(data)
 	return d.FromParser(v)
 }
@@ -35,6 +32,9 @@ type Decoder struct {
 	// global settings
 	strict bool
+
+	// toggles unmarshaler interface
+	unmarshalerInterface bool
 }
 // NewDecoder creates a new Decoder that will read from r.
@@ -54,6 +54,29 @@ func (d *Decoder) DisallowUnknownFields() *Decoder {
 	return d
 }
+
+// EnableUnmarshalerInterface enables support for the unstable.Unmarshaler
+// interface.
+//
+// With this feature enabled, types implementing the unstable.Unmarshaler
+// interface can be decoded from any structure of the document. It allows types
+// that don't have a straightforward TOML representation to provide their own
+// decoding logic.
+//
+// The UnmarshalTOML method receives raw TOML bytes:
+// - For single values: the raw value bytes (e.g., `"hello"` for a string)
+// - For tables: all key-value lines belonging to that table
+// - For inline tables/arrays: the raw bytes of the inline structure
+//
+// The unstable.RawMessage type can be used to capture raw TOML bytes for
+// later processing, similar to json.RawMessage.
+//
+// *Unstable:* This method does not follow the compatibility guarantees of
+// semver. It can be changed or removed without a new major version being
+// issued.
+func (d *Decoder) EnableUnmarshalerInterface() *Decoder {
+	d.unmarshalerInterface = true
+	return d
+}
 // Decode the whole content of r into v.
 //
 // By default, values in the document that don't exist in the target Go value
@@ -96,26 +119,26 @@ func (d *Decoder) DisallowUnknownFields() *Decoder {
 // Inline Table -> same as Table
 // Array of Tables -> same as Array and Table
 func (d *Decoder) Decode(v interface{}) error {
-	b, err := ioutil.ReadAll(d.r)
+	b, err := io.ReadAll(d.r)
 	if err != nil {
 		return fmt.Errorf("toml: %w", err)
 	}
-	p := unstable.Parser{}
-	p.Reset(b)
 	dec := decoder{
-		p: &p,
 		strict: strict{
 			Enabled: d.strict,
+			doc:     b,
 		},
+		unmarshalerInterface: d.unmarshalerInterface,
 	}
+	dec.p.Reset(b)
 	return dec.FromParser(v)
 }
 type decoder struct {
 	// Which parser instance in use for this decoding session.
-	p *unstable.Parser
+	p unstable.Parser
 
 	// Flag indicating that the current expression is stashed.
 	// If set to true, calling nextExpr will not actually pull a new expression
@@ -127,6 +150,10 @@ type decoder struct {
 	// need to be skipped.
 	skipUntilTable bool
 
+	// Flag indicating that the current array/slice table should be cleared
+	// because it is the first encounter of an array table.
+	clearArrayTable bool
+
 	// Tracks position in Go arrays.
 	// This is used when decoding [[array tables]] into Go arrays. Given array
 	// tables are separate TOML expression, we need to keep track of where we
@@ -139,6 +166,9 @@ type decoder struct {
 	// Strict mode
 	strict strict
 
+	// Flag that enables/disables unmarshaler interface.
+	unmarshalerInterface bool
+
 	// Current context for the error.
 	errorContext *errorContext
 }
@@ -201,7 +231,7 @@ func (d *decoder) FromParser(v interface{}) error {
 	}
 	if r.IsNil() {
-		return fmt.Errorf("toml: decoding pointer target cannot be nil")
+		return errors.New("toml: decoding pointer target cannot be nil")
 	}
 	r = r.Elem()
@@ -246,9 +276,10 @@ Rules for the unmarshal code:
 func (d *decoder) handleRootExpression(expr *unstable.Node, v reflect.Value) error {
 	var x reflect.Value
 	var err error
+	var first bool // used to clear array tables on first use
-	if !(d.skipUntilTable && expr.Kind == unstable.KeyValue) {
-		err = d.seen.CheckExpression(expr)
+	if !d.skipUntilTable || expr.Kind != unstable.KeyValue {
+		first, err = d.seen.CheckExpression(expr)
 		if err != nil {
 			return err
 		}
 	}
@@ -267,6 +298,7 @@ func (d *decoder) handleRootExpression(expr *unstable.Node, v reflect.Value) err
 	case unstable.ArrayTable:
 		d.skipUntilTable = false
 		d.strict.EnterArrayTable(expr)
+		d.clearArrayTable = first
 		x, err = d.handleArrayTable(expr.Key(), v)
 	default:
 		panic(fmt.Errorf("parser should not permit expression of kind %s at document root", expr.Kind))
@@ -307,6 +339,10 @@ func (d *decoder) handleArrayTableCollectionLast(key unstable.Iterator, v reflec
 			reflect.Copy(nelem, elem)
 			elem = nelem
 		}
+		if d.clearArrayTable && elem.Len() > 0 {
+			elem.SetLen(0)
+			d.clearArrayTable = false
+		}
 	}
 	return d.handleArrayTableCollectionLast(key, elem)
 case reflect.Ptr:
@@ -325,6 +361,10 @@ func (d *decoder) handleArrayTableCollectionLast(key unstable.Iterator, v reflec
 		return v, nil
 	case reflect.Slice:
+		if d.clearArrayTable && v.Len() > 0 {
+			v.SetLen(0)
+			d.clearArrayTable = false
+		}
 		elemType := v.Type().Elem()
 		var elem reflect.Value
 		if elemType.Kind() == reflect.Interface {
@@ -343,7 +383,7 @@ func (d *decoder) handleArrayTableCollectionLast(key unstable.Iterator, v reflec
 	case reflect.Array:
 		idx := d.arrayIndex(true, v)
 		if idx >= v.Len() {
-			return v, fmt.Errorf("%s at position %d", d.typeMismatchError("array table", v.Type()), idx)
+			return v, fmt.Errorf("%w at position %d", d.typeMismatchError("array table", v.Type()), idx)
 		}
 		elem := v.Index(idx)
 		_, err := d.handleArrayTable(key, elem)
@@ -381,27 +421,51 @@ func (d *decoder) handleArrayTableCollection(key unstable.Iterator, v reflect.Va
 		return v, nil
 	case reflect.Slice:
-		elem := v.Index(v.Len() - 1)
+		// Create a new element when the slice is empty; otherwise operate on
+		// the last element.
+		var (
+			elem    reflect.Value
+			created bool
+		)
+		if v.Len() == 0 {
+			created = true
+			elemType := v.Type().Elem()
+			if elemType.Kind() == reflect.Interface {
+				elem = makeMapStringInterface()
+			} else {
+				elem = reflect.New(elemType).Elem()
+			}
+		} else {
+			elem = v.Index(v.Len() - 1)
+		}
 		x, err := d.handleArrayTable(key, elem)
 		if err != nil || d.skipUntilTable {
 			return reflect.Value{}, err
 		}
 		if x.IsValid() {
-			elem.Set(x)
+			if created {
+				elem = x
+			} else {
+				elem.Set(x)
+			}
 		}
+		if created {
+			return reflect.Append(v, elem), nil
+		}
 		return v, err
 	case reflect.Array:
 		idx := d.arrayIndex(false, v)
 		if idx >= v.Len() {
-			return v, fmt.Errorf("%s at position %d", d.typeMismatchError("array table", v.Type()), idx)
+			return v, fmt.Errorf("%w at position %d", d.typeMismatchError("array table", v.Type()), idx)
 		}
 		elem := v.Index(idx)
 		_, err := d.handleArrayTable(key, elem)
 		return v, err
+	default:
+		return d.handleArrayTable(key, v)
 	}
-	return d.handleArrayTable(key, v)
 }
 func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn handlerFn, makeFn valueMakerFn) (reflect.Value, error) {
@@ -435,7 +499,8 @@ func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn h
 	mv := v.MapIndex(mk)
 	set := false
-	if !mv.IsValid() {
+	switch {
+	case !mv.IsValid():
 		// If there is no value in the map, create a new one according to
 		// the map type. If the element type is interface, create either a
 		// map[string]interface{} or a []interface{} depending on whether
@@ -448,13 +513,13 @@ func (d *decoder) handleKeyPart(key unstable.Iterator, v reflect.Value, nextFn h
 			mv = reflect.New(t).Elem()
 		}
 		set = true
-	} else if mv.Kind() == reflect.Interface {
+	case mv.Kind() == reflect.Interface:
 		mv = mv.Elem()
 		if !mv.IsValid() {
 			mv = makeFn()
 		}
 		set = true
-	} else if !mv.CanAddr() {
+	case !mv.CanAddr():
 		vt := v.Type()
 		t := vt.Elem()
 		oldmv := mv
@@ -539,18 +604,28 @@ func (d *decoder) handleArrayTablePart(key unstable.Iterator, v reflect.Value) (
 // cannot handle it.
 func (d *decoder) handleTable(key unstable.Iterator, v reflect.Value) (reflect.Value, error) {
 	if v.Kind() == reflect.Slice {
-		if v.Len() == 0 {
-			return reflect.Value{}, unstable.NewParserError(key.Node().Data, "cannot store a table in a slice")
-		}
-		elem := v.Index(v.Len() - 1)
-		x, err := d.handleTable(key, elem)
-		if err != nil {
-			return reflect.Value{}, err
-		}
-		if x.IsValid() {
-			elem.Set(x)
-		}
-		return reflect.Value{}, nil
+		// For non-empty slices, work with the last element
+		if v.Len() > 0 {
+			elem := v.Index(v.Len() - 1)
+			x, err := d.handleTable(key, elem)
+			if err != nil {
+				return reflect.Value{}, err
+			}
+			if x.IsValid() {
+				elem.Set(x)
+			}
+			return reflect.Value{}, nil
+		}
+		// Empty slice - check if it implements Unmarshaler (e.g., RawMessage)
+		// and we're at the end of the key path
+		if d.unmarshalerInterface && !key.Next() {
+			if v.CanAddr() && v.Addr().CanInterface() {
+				if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
+					return d.handleKeyValuesUnmarshaler(outi)
+				}
+			}
+		}
+		return reflect.Value{}, unstable.NewParserError(key.Node().Data, "cannot store a table in a slice")
 	}
 	if key.Next() {
 		// Still scoping the key
@@ -564,6 +639,24 @@ func (d *decoder) handleTable(key unstable.Iterator, v reflect.Value) (reflect.V
 // Handle root expressions until the end of the document or the next
 // non-key-value.
 func (d *decoder) handleKeyValues(v reflect.Value) (reflect.Value, error) {
+	// Check if target implements Unmarshaler before processing key-values.
+	// This allows types to handle entire tables themselves.
+	if d.unmarshalerInterface {
+		vv := v
+		for vv.Kind() == reflect.Ptr {
+			if vv.IsNil() {
+				vv.Set(reflect.New(vv.Type().Elem()))
+			}
+			vv = vv.Elem()
+		}
+		if vv.CanAddr() && vv.Addr().CanInterface() {
+			if outi, ok := vv.Addr().Interface().(unstable.Unmarshaler); ok {
+				// Collect all key-value expressions for this table
+				return d.handleKeyValuesUnmarshaler(outi)
+			}
+		}
+	}
 	var rv reflect.Value
 	for d.nextExpr() {
 		expr := d.expr()
@@ -576,7 +669,7 @@ func (d *decoder) handleKeyValues(v reflect.Value) (reflect.Value, error) {
 			break
 		}
-		err := d.seen.CheckExpression(expr)
+		_, err := d.seen.CheckExpression(expr)
 		if err != nil {
 			return reflect.Value{}, err
 		}
@@ -593,6 +686,41 @@ func (d *decoder) handleKeyValues(v reflect.Value) (reflect.Value, error) {
 	return rv, nil
 }
// handleKeyValuesUnmarshaler collects all key-value expressions for a table
// and passes them to the Unmarshaler as raw TOML bytes.
func (d *decoder) handleKeyValuesUnmarshaler(u unstable.Unmarshaler) (reflect.Value, error) {
// Collect raw bytes from all key-value expressions for this table.
// We use the Raw field on each KeyValue expression to preserve the
// original formatting (whitespace, quoting style, etc.) from the document.
var buf []byte
for d.nextExpr() {
expr := d.expr()
if expr.Kind != unstable.KeyValue {
d.stashExpr()
break
}
_, err := d.seen.CheckExpression(expr)
if err != nil {
return reflect.Value{}, err
}
// Use the raw bytes from the original document to preserve formatting
if expr.Raw.Length > 0 {
raw := d.p.Raw(expr.Raw)
buf = append(buf, raw...)
}
buf = append(buf, '\n')
}
if err := u.UnmarshalTOML(buf); err != nil {
return reflect.Value{}, err
}
return reflect.Value{}, nil
}
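The loop above concatenates each KeyValue expression's raw bytes with a trailing newline, so the Unmarshaler receives a fragment that is itself valid TOML. A standalone sketch of that buffer assembly (the sample key-value lines are illustrative):

```go
package main

import "fmt"

func main() {
	// Raw KeyValue expressions as sliced from the source document
	// (original whitespace and quoting preserved), standing in for
	// the expr.Raw bytes collected by the loop above.
	rawExprs := [][]byte{
		[]byte(`host = "localhost"`),
		[]byte(`port = 8080`),
	}
	var buf []byte
	for _, raw := range rawExprs {
		buf = append(buf, raw...)
		buf = append(buf, '\n')
	}
	// The resulting buffer is valid TOML and can be re-parsed inside
	// the UnmarshalTOML implementation.
	fmt.Printf("%s", buf)
}
```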
 type (
 	handlerFn    func(key unstable.Iterator, v reflect.Value) (reflect.Value, error)
 	valueMakerFn func() reflect.Value
@@ -634,9 +762,24 @@ func (d *decoder) handleValue(value *unstable.Node, v reflect.Value) error {
 		v = initAndDereferencePointer(v)
 	}
-	ok, err := d.tryTextUnmarshaler(value, v)
-	if ok || err != nil {
-		return err
-	}
+	if d.unmarshalerInterface {
+		if v.CanAddr() && v.Addr().CanInterface() {
+			if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
+				// Pass raw bytes from the original document
+				return outi.UnmarshalTOML(d.p.Raw(value.Raw))
+			}
+		}
+	}
+
+	// Only try TextUnmarshaler for scalar types. For Array and InlineTable,
+	// fall through to struct/map unmarshaling to allow flexible unmarshaling
+	// where a type can implement UnmarshalText for string values but still
+	// be populated field-by-field from a table. See issue #974.
+	if value.Kind != unstable.Array && value.Kind != unstable.InlineTable {
+		ok, err := d.tryTextUnmarshaler(value, v)
+		if ok || err != nil {
+			return err
+		}
+	}
 
 	switch value.Kind {
@@ -778,6 +921,9 @@ func (d *decoder) unmarshalDateTime(value *unstable.Node, v reflect.Value) error
 		return err
 	}
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("datetime", v.Type()))
+	}
 	v.Set(reflect.ValueOf(dt))
 	return nil
 }
@@ -788,14 +934,14 @@ func (d *decoder) unmarshalLocalDate(value *unstable.Node, v reflect.Value) erro
 		return err
 	}
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local date", v.Type()))
+	}
 	if v.Type() == timeType {
-		cast := ld.AsTime(time.Local)
-		v.Set(reflect.ValueOf(cast))
+		v.Set(reflect.ValueOf(ld.AsTime(time.Local)))
 		return nil
 	}
 	v.Set(reflect.ValueOf(ld))
 	return nil
 }
@@ -809,6 +955,9 @@ func (d *decoder) unmarshalLocalTime(value *unstable.Node, v reflect.Value) erro
 		return unstable.NewParserError(rest, "extra characters at the end of a local time")
 	}
 
+	if v.Kind() != reflect.Interface {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local time", v.Type()))
+	}
+
 	v.Set(reflect.ValueOf(lt))
 	return nil
 }
@@ -823,15 +972,14 @@ func (d *decoder) unmarshalLocalDateTime(value *unstable.Node, v reflect.Value)
 		return unstable.NewParserError(rest, "extra characters at the end of a local date time")
 	}
 
+	if v.Kind() != reflect.Interface && v.Type() != timeType {
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("local datetime", v.Type()))
+	}
+
 	if v.Type() == timeType {
-		cast := ldt.AsTime(time.Local)
-		v.Set(reflect.ValueOf(cast))
+		v.Set(reflect.ValueOf(ldt.AsTime(time.Local)))
 		return nil
 	}
 
 	v.Set(reflect.ValueOf(ldt))
 	return nil
 }
@@ -886,8 +1034,9 @@ const (
 // compile time, so it is computed during initialization.
 var maxUint int64 = math.MaxInt64
 
-func init() {
+func init() { //nolint:gochecknoinits
 	m := uint64(^uint(0))
+	// #nosec G115
 	if m < uint64(maxUint) {
 		maxUint = int64(m)
 	}
@@ -967,7 +1116,7 @@ func (d *decoder) unmarshalInteger(value *unstable.Node, v reflect.Value) error
 	case reflect.Interface:
 		r = reflect.ValueOf(i)
 	default:
-		return unstable.NewParserError(d.p.Raw(value.Raw), d.typeMismatchString("integer", v.Type()))
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("integer", v.Type()))
 	}
 
 	if !r.Type().AssignableTo(v.Type()) {
@@ -986,7 +1135,7 @@ func (d *decoder) unmarshalString(value *unstable.Node, v reflect.Value) error {
 	case reflect.Interface:
 		v.Set(reflect.ValueOf(string(value.Data)))
 	default:
-		return unstable.NewParserError(d.p.Raw(value.Raw), d.typeMismatchString("string", v.Type()))
+		return unstable.NewParserError(d.p.Raw(value.Raw), "%s", d.typeMismatchString("string", v.Type()))
 	}
 
 	return nil
@@ -1031,14 +1180,45 @@ func (d *decoder) keyFromData(keyType reflect.Type, data []byte) (reflect.Value,
 	}
 		return mk, nil
-	case reflect.PtrTo(keyType).Implements(textUnmarshalerType):
+	case reflect.PointerTo(keyType).Implements(textUnmarshalerType):
 		mk := reflect.New(keyType)
 		if err := mk.Interface().(encoding.TextUnmarshaler).UnmarshalText(data); err != nil {
 			return reflect.Value{}, fmt.Errorf("toml: error unmarshalling key type %s from text: %w", stringType, err)
 		}
 		return mk.Elem(), nil
 	}
-	return reflect.Value{}, fmt.Errorf("toml: cannot convert map key of type %s to expected type %s", stringType, keyType)
+
+	switch keyType.Kind() {
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		key, err := strconv.ParseInt(string(data), 10, 64)
+		if err != nil {
+			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from integer: %w", stringType, err)
+		}
+		return reflect.ValueOf(key).Convert(keyType), nil
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		key, err := strconv.ParseUint(string(data), 10, 64)
+		if err != nil {
+			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from unsigned integer: %w", stringType, err)
+		}
+		return reflect.ValueOf(key).Convert(keyType), nil
+	case reflect.Float32:
+		key, err := strconv.ParseFloat(string(data), 32)
+		if err != nil {
+			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from float: %w", stringType, err)
+		}
+		return reflect.ValueOf(float32(key)), nil
+	case reflect.Float64:
+		key, err := strconv.ParseFloat(string(data), 64)
+		if err != nil {
+			return reflect.Value{}, fmt.Errorf("toml: error parsing key of type %s from float: %w", stringType, err)
+		}
+		return reflect.ValueOf(float64(key)), nil
+	default:
+		return reflect.Value{}, fmt.Errorf("toml: cannot convert map key of type %s to expected type %s", stringType, keyType)
+	}
 }
 func (d *decoder) handleKeyValuePart(key unstable.Iterator, value *unstable.Node, v reflect.Value) (reflect.Value, error) {
@@ -1084,6 +1264,18 @@ func (d *decoder) handleKeyValuePart(key unstable.Iterator, value *unstable.Node
 	case reflect.Struct:
 		path, found := structFieldPath(v, string(key.Node().Data))
 		if !found {
+			// If no matching struct field is found but the target implements the
+			// unstable.Unmarshaler interface (and it is enabled), delegate the
+			// decoding of this value to the custom unmarshaler.
+			if d.unmarshalerInterface {
+				if v.CanAddr() && v.Addr().CanInterface() {
+					if outi, ok := v.Addr().Interface().(unstable.Unmarshaler); ok {
+						// Pass raw bytes from the original document
+						return reflect.Value{}, outi.UnmarshalTOML(d.p.Raw(value.Raw))
+					}
+				}
+			}
+
+			// Otherwise, keep previous behavior and skip until the next table.
 			d.skipUntilTable = true
 			break
 		}
@@ -1097,9 +1289,9 @@ func (d *decoder) handleKeyValuePart(key unstable.Iterator, value *unstable.Node
 		f := fieldByIndex(v, path)
 
-		if !f.CanSet() {
-			// If the field is not settable, need to take a slower path and make a copy of
-			// the struct itself to a new location.
+		if !f.CanAddr() {
+			// If the field is not addressable, need to take a slower path and
+			// make a copy of the struct itself to a new location.
 			nvp := reflect.New(v.Type())
 			nvp.Elem().Set(v)
 			v = nvp.Elem()
@@ -1189,13 +1381,13 @@ func fieldByIndex(v reflect.Value, path []int) reflect.Value {
 type fieldPathsMap = map[string][]int
 
-var globalFieldPathsCache atomic.Value // map[danger.TypeID]fieldPathsMap
+var globalFieldPathsCache atomic.Value // map[reflect.Type]fieldPathsMap
 
 func structFieldPath(v reflect.Value, name string) ([]int, bool) {
 	t := v.Type()
 
-	cache, _ := globalFieldPathsCache.Load().(map[danger.TypeID]fieldPathsMap)
-	fieldPaths, ok := cache[danger.MakeTypeID(t)]
+	cache, _ := globalFieldPathsCache.Load().(map[reflect.Type]fieldPathsMap)
+	fieldPaths, ok := cache[t]
 	if !ok {
 		fieldPaths = map[string][]int{}
@@ -1206,8 +1398,8 @@ func structFieldPath(v reflect.Value, name string) ([]int, bool) {
 			fieldPaths[strings.ToLower(name)] = path
 		})
 
-		newCache := make(map[danger.TypeID]fieldPathsMap, len(cache)+1)
-		newCache[danger.MakeTypeID(t)] = fieldPaths
+		newCache := make(map[reflect.Type]fieldPathsMap, len(cache)+1)
+		newCache[t] = fieldPaths
 		for k, v := range cache {
 			newCache[k] = v
 		}
@@ -1231,7 +1423,9 @@ func forEachField(t reflect.Type, path []int, do func(name string, path []int))
 			continue
 		}
 
-		fieldPath := append(path, i)
+		fieldPath := make([]int, 0, len(path)+1)
+		fieldPath = append(fieldPath, path...)
+		fieldPath = append(fieldPath, i)
 		fieldPath = fieldPath[:len(fieldPath):len(fieldPath)]
 
 		name := f.Tag.Get("toml")
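The `fieldPath` change above replaces `append(path, i)` with an explicit copy. The reason is Go's slice aliasing: two appends from the same parent slice with spare capacity share one backing array, so the second append can overwrite the first result. A minimal self-contained demonstration:

```go
package main

import "fmt"

func main() {
	base := make([]int, 1, 4) // spare capacity, like a parent field path
	base[0] = 0

	// Buggy pattern: both children share base's backing array; the second
	// append overwrites the element the first append wrote.
	a := append(base, 1)
	b := append(base, 2)
	fmt.Println(a, b) // a's last element was clobbered: [0 2] [0 2]

	// Fixed pattern: copy into a fresh slice, as the diff does with
	// make + append(..., path...) + append(..., i).
	c := make([]int, 0, len(base)+1)
	c = append(c, base...)
	c = append(c, 1)
	d := make([]int, 0, len(base)+1)
	d = append(d, base...)
	d = append(d, 2)
	fmt.Println(c, d) // [0 1] [0 2]
}
```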
+1231 -137
File diff suppressed because it is too large
+38 -29
@@ -1,10 +1,8 @@
 package unstable
 
 import (
+	"errors"
 	"fmt"
-	"unsafe"
-
-	"github.com/pelletier/go-toml/v2/internal/danger"
 )
 // Iterator over a sequence of nodes.
@@ -19,30 +17,39 @@ import (
 //	// do something with n
 //	}
 type Iterator struct {
+	nodes   *[]Node
+	idx     int32
 	started bool
-	node    *Node
 }
 // Next moves the iterator forward and returns true if points to a
 // node, false otherwise.
 func (c *Iterator) Next() bool {
+	if c.nodes == nil {
+		return false
+	}
 	if !c.started {
 		c.started = true
-	} else if c.node.Valid() {
-		c.node = c.node.Next()
+	} else if c.idx >= 0 {
+		c.idx = (*c.nodes)[c.idx].next
 	}
-	return c.node.Valid()
+	return c.idx >= 0 && int(c.idx) < len(*c.nodes)
 }
 // IsLast returns true if the current node of the iterator is the last
 // one. Subsequent calls to Next() will return false.
 func (c *Iterator) IsLast() bool {
-	return c.node.next == 0
+	return c.nodes == nil || c.idx < 0 || (*c.nodes)[c.idx].next < 0
 }
 // Node returns a pointer to the node pointed at by the iterator.
 func (c *Iterator) Node() *Node {
-	return c.node
+	if c.nodes == nil || c.idx < 0 {
+		return nil
+	}
+	n := &(*c.nodes)[c.idx]
+	n.nodes = c.nodes
+	return n
 }
 // Node in a TOML expression AST.
@@ -65,11 +72,12 @@ type Node struct {
 	Raw  Range  // Raw bytes from the input.
 	Data []byte // Node value (either allocated or referencing the input).
 
-	// References to other nodes, as offsets in the backing array
-	// from this node. References can go backward, so those can be
-	// negative.
-	next  int // 0 if last element
-	child int // 0 if no child
+	// Absolute indices into the backing nodes slice. -1 means none.
+	next  int32
+	child int32
+
+	// Reference to the backing nodes slice for navigation.
+	nodes *[]Node
 }
 // Range of bytes in the document.
@@ -80,24 +88,24 @@ type Range struct {
 // Next returns a pointer to the next node, or nil if there is no next node.
 func (n *Node) Next() *Node {
-	if n.next == 0 {
+	if n.next < 0 {
 		return nil
 	}
-	ptr := unsafe.Pointer(n)
-	size := unsafe.Sizeof(Node{})
-	return (*Node)(danger.Stride(ptr, size, n.next))
+	next := &(*n.nodes)[n.next]
+	next.nodes = n.nodes
+	return next
 }
 // Child returns a pointer to the first child node of this node. Other children
-// can be accessed calling Next on the first child. Returns an nil if this Node
+// can be accessed calling Next on the first child. Returns nil if this Node
 // has no child.
 func (n *Node) Child() *Node {
-	if n.child == 0 {
+	if n.child < 0 {
 		return nil
 	}
-	ptr := unsafe.Pointer(n)
-	size := unsafe.Sizeof(Node{})
-	return (*Node)(danger.Stride(ptr, size, n.child))
+	child := &(*n.nodes)[n.child]
+	child.nodes = n.nodes
+	return child
 }
 // Valid returns true if the node's kind is set (not to Invalid).
@@ -111,13 +119,14 @@ func (n *Node) Valid() bool {
 func (n *Node) Key() Iterator {
 	switch n.Kind {
 	case KeyValue:
-		value := n.Child()
-		if !value.Valid() {
-			panic(fmt.Errorf("KeyValue should have at least two children"))
+		child := n.child
+		if child < 0 {
+			panic(errors.New("KeyValue should have at least two children"))
 		}
-		return Iterator{node: value.Next()}
+		valueNode := &(*n.nodes)[child]
+		return Iterator{nodes: n.nodes, idx: valueNode.next}
 	case Table, ArrayTable:
-		return Iterator{node: n.Child()}
+		return Iterator{nodes: n.nodes, idx: n.child}
 	default:
 		panic(fmt.Errorf("Key() is not supported on a %s", n.Kind))
 	}
@@ -132,5 +141,5 @@ func (n *Node) Value() *Node {
 // Children returns an iterator over a node's children.
 func (n *Node) Children() Iterator {
-	return Iterator{node: n.Child()}
+	return Iterator{nodes: n.nodes, idx: n.child}
 }
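The refactor above replaces unsafe pointer-stride navigation between AST nodes with absolute `int32` indices into the backing `[]Node` slice, which stays valid even when the slice reallocates as it grows. The pattern can be sketched independently (the `node`, `tree`, and `push` names are illustrative only):

```go
package main

import "fmt"

// node links siblings by absolute index into a backing slice; -1 means
// "no next sibling". No pointer arithmetic is involved, so the links
// survive reallocation of the backing slice.
type node struct {
	data string
	next int32 // index of next sibling, -1 if last
}

type tree struct{ nodes []node }

// push appends a node and chains it after prev (if prev is valid).
func (t *tree) push(data string, prev int32) int32 {
	idx := int32(len(t.nodes))
	t.nodes = append(t.nodes, node{data: data, next: -1})
	if prev >= 0 {
		t.nodes[prev].next = idx
	}
	return idx
}

func main() {
	var t tree
	a := t.push("a", -1)
	b := t.push("b", a)
	t.push("c", b)

	// Walk the sibling chain by index, like the new Iterator does.
	for i := a; i >= 0; i = t.nodes[i].next {
		fmt.Print(t.nodes[i].data)
	}
	fmt.Println()
}
```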
+16 -14
@@ -5,12 +5,14 @@ import (
 	"testing"
 )
 
-var valid10Ascii = []byte("1234567890")
-var valid10Utf8 = []byte("日本語a")
-var valid1kUtf8 = bytes.Repeat([]byte("0123456789日本語日本語日本語日abcdefghijklmnopqrstuvwx"), 16)
-var valid1MUtf8 = bytes.Repeat(valid1kUtf8, 1024)
-var valid1kAscii = bytes.Repeat([]byte("012345678998jhjklasDJKLAAdjdfjsdklfjdslkabcdefghijklmnopqrstuvwx"), 16)
-var valid1MAscii = bytes.Repeat(valid1kAscii, 1024)
+var (
+	valid10ASCII = []byte("1234567890")
+	valid10Utf8  = []byte("日本語a")
+	valid1kUtf8  = bytes.Repeat([]byte("0123456789日本語日本語日本語日abcdefghijklmnopqrstuvwx"), 16)
+	valid1MUtf8  = bytes.Repeat(valid1kUtf8, 1024)
+	valid1kASCII = bytes.Repeat([]byte("012345678998jhjklasDJKLAAdjdfjsdklfjdslkabcdefghijklmnopqrstuvwx"), 16)
+	valid1MASCII = bytes.Repeat(valid1kASCII, 1024)
+)
 func BenchmarkScanComments(b *testing.B) {
 	wrap := func(x []byte) []byte {
@@ -18,9 +20,9 @@ func BenchmarkScanComments(b *testing.B) {
 	}
 
 	inputs := map[string][]byte{
-		"10Valid":     wrap(valid10Ascii),
-		"1kValid":     wrap(valid1kAscii),
-		"1MValid":     wrap(valid1MAscii),
+		"10Valid":     wrap(valid10ASCII),
+		"1kValid":     wrap(valid1kASCII),
+		"1MValid":     wrap(valid1MASCII),
 		"10ValidUtf8": wrap(valid10Utf8),
 		"1kValidUtf8": wrap(valid1kUtf8),
 		"1MValidUtf8": wrap(valid1MUtf8),
@@ -33,7 +35,7 @@ func BenchmarkScanComments(b *testing.B) {
 			b.ResetTimer()
 			for i := 0; i < b.N; i++ {
-				scanComment(input)
+				_, _, _ = scanComment(input)
 			}
 		})
 	}
@@ -45,9 +47,9 @@ func BenchmarkParseLiteralStringValid(b *testing.B) {
 	}
 
 	inputs := map[string][]byte{
-		"10Valid":     wrap(valid10Ascii),
-		"1kValid":     wrap(valid1kAscii),
-		"1MValid":     wrap(valid1MAscii),
+		"10Valid":     wrap(valid10ASCII),
+		"1kValid":     wrap(valid1kASCII),
+		"1MValid":     wrap(valid1MASCII),
 		"10ValidUtf8": wrap(valid10Utf8),
 		"1kValidUtf8": wrap(valid1kUtf8),
 		"1MValidUtf8": wrap(valid1MUtf8),
@@ -63,7 +65,7 @@ func BenchmarkParseLiteralStringValid(b *testing.B) {
 			for i := 0; i < b.N; i++ {
 				_, _, _, err := p.parseLiteralString(input)
 				if err != nil {
-					panic(err)
+					b.Error(err)
 				}
 			}
 		})
+10 -17
@@ -7,15 +7,6 @@ type root struct {
 	nodes []Node
 }
 
-// Iterator over the top level nodes.
-func (r *root) Iterator() Iterator {
-	it := Iterator{}
-	if len(r.nodes) > 0 {
-		it.node = &r.nodes[0]
-	}
-	return it
-}
-
 func (r *root) at(idx reference) *Node {
 	return &r.nodes[idx]
 }
@@ -33,12 +24,10 @@ type builder struct {
 	lastIdx int
 }
 
-func (b *builder) Tree() *root {
-	return &b.tree
-}
-
 func (b *builder) NodeAt(ref reference) *Node {
-	return b.tree.at(ref)
+	n := b.tree.at(ref)
+	n.nodes = &b.tree.nodes
+	return n
 }
 
 func (b *builder) Reset() {
@@ -48,24 +37,28 @@ func (b *builder) Reset() {
 func (b *builder) Push(n Node) reference {
 	b.lastIdx = len(b.tree.nodes)
+	n.next = -1
+	n.child = -1
 	b.tree.nodes = append(b.tree.nodes, n)
 	return reference(b.lastIdx)
 }
 func (b *builder) PushAndChain(n Node) reference {
 	newIdx := len(b.tree.nodes)
+	n.next = -1
+	n.child = -1
 	b.tree.nodes = append(b.tree.nodes, n)
 	if b.lastIdx >= 0 {
-		b.tree.nodes[b.lastIdx].next = newIdx - b.lastIdx
+		b.tree.nodes[b.lastIdx].next = int32(newIdx) //nolint:gosec // TOML ASTs are small
 	}
 	b.lastIdx = newIdx
 	return reference(b.lastIdx)
 }
 func (b *builder) AttachChild(parent reference, child reference) {
-	b.tree.nodes[parent].child = int(child) - int(parent)
+	b.tree.nodes[parent].child = int32(child) //nolint:gosec // TOML ASTs are small
 }
 
 func (b *builder) Chain(from reference, to reference) {
-	b.tree.nodes[from].next = int(to) - int(from)
+	b.tree.nodes[from].next = int32(to) //nolint:gosec // TOML ASTs are small
 }
+16 -4
@@ -6,28 +6,40 @@ import "fmt"
 type Kind int
 
 const (
-	// Meta
+	// Invalid represents an invalid meta node.
 	Invalid Kind = iota
+	// Comment represents a comment meta node.
 	Comment
+	// Key represents a key meta node.
 	Key
 
-	// Top level structures
+	// Table represents a top-level table.
 	Table
+	// ArrayTable represents a top-level array table.
 	ArrayTable
+	// KeyValue represents a top-level key value.
 	KeyValue
 
-	// Containers values
+	// Array represents an array container value.
 	Array
+	// InlineTable represents an inline table container value.
 	InlineTable
 
-	// Values
+	// String represents a string value.
 	String
+	// Bool represents a boolean value.
 	Bool
+	// Float represents a floating point value.
 	Float
+	// Integer represents an integer value.
 	Integer
+	// LocalDate represents a local date value.
 	LocalDate
+	// LocalTime represents a local time value.
 	LocalTime
+	// LocalDateTime represents a local date/time value.
 	LocalDateTime
+	// DateTime represents a date/time value.
 	DateTime
 )
+60 -40
@@ -6,7 +6,6 @@ import (
 	"unicode"
 
 	"github.com/pelletier/go-toml/v2/internal/characters"
-	"github.com/pelletier/go-toml/v2/internal/danger"
 )
 // ParserError describes an error relative to the content of the document.
@@ -70,11 +69,26 @@ func (p *Parser) Data() []byte {
 // panics.
 func (p *Parser) Range(b []byte) Range {
 	return Range{
-		Offset: uint32(danger.SubsliceOffset(p.data, b)),
-		Length: uint32(len(b)),
+		Offset: uint32(p.subsliceOffset(b)), //nolint:gosec // TOML documents are small
+		Length: uint32(len(b)),              //nolint:gosec // TOML documents are small
 	}
 }
+// rangeOfToken computes the Range of a token given the remaining bytes after the token.
+// This is used when the token was extracted from the beginning of some position,
+// and 'rest' is what remains after the token.
+func (p *Parser) rangeOfToken(token, rest []byte) Range {
+	offset := len(p.data) - len(token) - len(rest)
+	return Range{Offset: uint32(offset), Length: uint32(len(token))} //nolint:gosec // TOML documents are small
+}
+
+// subsliceOffset returns the byte offset of subslice b within p.data.
+// b must be a suffix (tail) of p.data.
+func (p *Parser) subsliceOffset(b []byte) int {
+	// b is a suffix of p.data, so its offset is len(p.data) - len(b)
+	return len(p.data) - len(b)
+}
 // Raw returns the slice corresponding to the bytes in the given range.
 func (p *Parser) Raw(raw Range) []byte {
 	return p.data[raw.Offset : raw.Offset+raw.Length]
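Both `rangeOfToken` and `subsliceOffset` above rest on the same observation: when a slice is a tail of the original input (as `rest` is after the parser consumes a token), its offset falls out of plain length arithmetic, with no pointer math required. A tiny standalone illustration:

```go
package main

import "fmt"

func main() {
	data := []byte(`key = "value"`)

	// rest is a tail of data, as produced by a parser consuming input.
	rest := data[6:]

	// The tail's offset is simply the difference of lengths, valid as
	// long as rest really is a suffix of data.
	offset := len(data) - len(rest)
	fmt.Println(offset) // 6
}
```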
@@ -158,9 +172,17 @@ type Shape struct {
 	End   Position
 }
-func (p *Parser) position(b []byte) Position {
-	offset := danger.SubsliceOffset(p.data, b)
+// Shape returns the shape of the given range in the input. Will
+// panic if the range is not a subslice of the input.
+func (p *Parser) Shape(r Range) Shape {
+	return Shape{
+		Start: p.positionAt(int(r.Offset)),
+		End:   p.positionAt(int(r.Offset + r.Length)),
+	}
+}
+
+// positionAt returns the position at the given byte offset in the document.
+func (p *Parser) positionAt(offset int) Position {
 	lead := p.data[:offset]
 	return Position{
@@ -170,16 +192,6 @@ func (p *Parser) position(b []byte) Position {
 	}
 }
 
-// Shape returns the shape of the given range in the input. Will
-// panic if the range is not a subslice of the input.
-func (p *Parser) Shape(r Range) Shape {
-	raw := p.Raw(r)
-	return Shape{
-		Start: p.position(raw),
-		End:   p.position(raw[r.Length:]),
-	}
-}
 func (p *Parser) parseNewline(b []byte) ([]byte, error) {
 	if b[0] == '\n' {
 		return b[1:], nil
@@ -199,7 +211,7 @@ func (p *Parser) parseComment(b []byte) (reference, []byte, error) {
 	if p.KeepComments && err == nil {
 		ref = p.builder.Push(Node{
 			Kind: Comment,
-			Raw:  p.Range(data),
+			Raw:  p.rangeOfToken(data, rest),
 			Data: data,
 		})
 	}
@@ -316,6 +328,9 @@ func (p *Parser) parseStdTable(b []byte) (reference, []byte, error) {
 func (p *Parser) parseKeyval(b []byte) (reference, []byte, error) {
 	// keyval = key keyval-sep val
 
+	// Track the start position for Raw range
+	startB := b
+
 	ref := p.builder.Push(Node{
 		Kind: KeyValue,
 	})
@@ -348,6 +363,10 @@ func (p *Parser) parseKeyval(b []byte) (reference, []byte, error) {
 	p.builder.Chain(valRef, key)
 	p.builder.AttachChild(ref, valRef)
 
+	// Set Raw to span the entire key-value expression
+	node := p.builder.NodeAt(ref)
+	node.Raw = p.rangeOfToken(startB[:len(startB)-len(b)], b)
+
 	return ref, b, err
 }
@@ -376,7 +395,7 @@ func (p *Parser) parseVal(b []byte) (reference, []byte, error) {
 		if err == nil {
 			ref = p.builder.Push(Node{
 				Kind: String,
-				Raw:  p.Range(raw),
+				Raw:  p.rangeOfToken(raw, b),
 				Data: v,
 			})
 		}
@@ -394,7 +413,7 @@ func (p *Parser) parseVal(b []byte) (reference, []byte, error) {
 		if err == nil {
 			ref = p.builder.Push(Node{
 				Kind: String,
-				Raw:  p.Range(raw),
+				Raw:  p.rangeOfToken(raw, b),
 				Data: v,
 			})
 		}
@@ -456,7 +475,7 @@ func (p *Parser) parseInlineTable(b []byte) (reference, []byte, error) {
 	// inline-table-keyvals = keyval [ inline-table-sep inline-table-keyvals ]
 	parent := p.builder.Push(Node{
 		Kind: InlineTable,
-		Raw:  p.Range(b[:1]),
+		Raw:  p.rangeOfToken(b[:1], b[1:]),
 	})
 	first := true
@@ -542,7 +561,7 @@ func (p *Parser) parseValArray(b []byte) (reference, []byte, error) {
 	var err error
 	for len(b) > 0 {
-		cref := invalidReference
+		var cref reference
 		cref, b, err = p.parseOptionalWhitespaceCommentNewline(b)
 		if err != nil {
 			return parent, nil, err
@@ -611,12 +630,13 @@ func (p *Parser) parseOptionalWhitespaceCommentNewline(b []byte) (reference, []b
 	latestCommentRef := invalidReference
 
 	addComment := func(ref reference) {
-		if rootCommentRef == invalidReference {
+		switch {
+		case rootCommentRef == invalidReference:
 			rootCommentRef = ref
-		} else if latestCommentRef == invalidReference {
+		case latestCommentRef == invalidReference:
 			p.builder.AttachChild(rootCommentRef, ref)
 			latestCommentRef = ref
-		} else {
+		default:
 			p.builder.Chain(latestCommentRef, ref)
 			latestCommentRef = ref
 		}
@@ -704,11 +724,11 @@ func (p *Parser) parseMultilineBasicString(b []byte) ([]byte, []byte, []byte, er
 	if !escaped {
 		str := token[startIdx:endIdx]
-		verr := characters.Utf8TomlValidAlreadyEscaped(str)
-		if verr.Zero() {
+		highlight := characters.Utf8TomlValidAlreadyEscaped(str)
+		if len(highlight) == 0 {
 			return token, str, rest, nil
 		}
-		return nil, nil, nil, NewParserError(str[verr.Index:verr.Index+verr.Size], "invalid UTF-8")
+		return nil, nil, nil, NewParserError(highlight, "invalid UTF-8")
 	}
 
 	var builder bytes.Buffer
@@ -744,7 +764,7 @@ func (p *Parser) parseMultilineBasicString(b []byte) ([]byte, []byte, []byte, er
 			i += j
 			for ; i < len(token)-3; i++ {
 				c := token[i]
-				if !(c == '\n' || c == '\r' || c == ' ' || c == '\t') {
+				if c != '\n' && c != '\r' && c != ' ' && c != '\t' {
 					i--
 					break
 				}
@@ -820,7 +840,7 @@ func (p *Parser) parseKey(b []byte) (reference, []byte, error) {
 	ref := p.builder.Push(Node{
 		Kind: Key,
-		Raw:  p.Range(raw),
+		Raw:  p.rangeOfToken(raw, b),
 		Data: key,
 	})
@@ -836,7 +856,7 @@ func (p *Parser) parseKey(b []byte) (reference, []byte, error) {
 		p.builder.PushAndChain(Node{
 			Kind: Key,
-			Raw:  p.Range(raw),
+			Raw:  p.rangeOfToken(raw, b),
 			Data: key,
 		})
 	} else {
@@ -897,11 +917,11 @@ func (p *Parser) parseBasicString(b []byte) ([]byte, []byte, []byte, error) {
 	// validate the string and return a direct reference to the buffer.
 	if !escaped {
 		str := token[startIdx:endIdx]
-		verr := characters.Utf8TomlValidAlreadyEscaped(str)
-		if verr.Zero() {
+		highlight := characters.Utf8TomlValidAlreadyEscaped(str)
+		if len(highlight) == 0 {
 			return token, str, rest, nil
 		}
-		return nil, nil, nil, NewParserError(str[verr.Index:verr.Index+verr.Size], "invalid UTF-8")
+		return nil, nil, nil, NewParserError(highlight, "invalid UTF-8")
 	}
 
 	i := startIdx
@@ -972,7 +992,7 @@ func hexToRune(b []byte, length int) (rune, error) {
 	var r uint32
 	for i, c := range b {
-		d := uint32(0)
+		var d uint32
 		switch {
 		case '0' <= c && c <= '9':
 			d = uint32(c - '0')
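`hexToRune` accumulates hex digits into a rune value one character at a time. A self-contained sketch of the digit decoding assumed by that switch (the `hexDigit` helper is illustrative; the real function also enforces the digit count and rune validity):

```go
package main

import "fmt"

// hexDigit converts one ASCII hex character to its numeric value,
// mirroring the per-character switch in hexToRune.
func hexDigit(c byte) (uint32, bool) {
	switch {
	case '0' <= c && c <= '9':
		return uint32(c - '0'), true
	case 'a' <= c && c <= 'f':
		return uint32(c-'a') + 10, true
	case 'A' <= c && c <= 'F':
		return uint32(c-'A') + 10, true
	}
	return 0, false
}

func main() {
	// Accumulate the digits of a \u265E escape into a code point.
	var r uint32
	for _, c := range []byte("265E") {
		d, ok := hexDigit(c)
		if !ok {
			fmt.Println("invalid hex digit")
			return
		}
		r = r*16 + d
	}
	fmt.Printf("U+%04X\n", r) // U+265E
}
```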
@@ -1013,7 +1033,7 @@ func (p *Parser) parseIntOrFloatOrDateTime(b []byte) (reference, []byte, error)
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:3],
-			Raw:  p.Range(b[:3]),
+			Raw:  p.rangeOfToken(b[:3], b[3:]),
 		}), b[3:], nil
 	case 'n':
 		if !scanFollowsNan(b) {
@@ -1023,7 +1043,7 @@ func (p *Parser) parseIntOrFloatOrDateTime(b []byte) (reference, []byte, error)
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:3],
-			Raw:  p.Range(b[:3]),
+			Raw:  p.rangeOfToken(b[:3], b[3:]),
 		}), b[3:], nil
 	case '+', '-':
 		return p.scanIntOrFloat(b)
@@ -1076,7 +1096,7 @@ byteLoop:
 		}
 	case c == 'T' || c == 't' || c == ':' || c == '.':
 		hasTime = true
-	case c == '+' || c == '-' || c == 'Z' || c == 'z':
+	case c == '+' || c == 'Z' || c == 'z':
 		hasTz = true
 	case c == ' ':
 		if !seenSpace && i+1 < len(b) && isDigit(b[i+1]) {
@@ -1148,7 +1168,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Integer,
 			Data: b[:i],
-			Raw:  p.Range(b[:i]),
+			Raw:  p.rangeOfToken(b[:i], b[i:]),
 		}), b[i:], nil
 	}
@@ -1172,7 +1192,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:i+3],
-			Raw:  p.Range(b[:i+3]),
+			Raw:  p.rangeOfToken(b[:i+3], b[i+3:]),
 		}), b[i+3:], nil
 	}
@@ -1184,7 +1204,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 		return p.builder.Push(Node{
 			Kind: Float,
 			Data: b[:i+3],
-			Raw:  p.Range(b[:i+3]),
+			Raw:  p.rangeOfToken(b[:i+3], b[i+3:]),
 		}), b[i+3:], nil
 	}
@@ -1207,7 +1227,7 @@ func (p *Parser) scanIntOrFloat(b []byte) (reference, []byte, error) {
 	return p.builder.Push(Node{
 		Kind: kind,
 		Data: b[:i],
-		Raw:  p.Range(b[:i]),
+		Raw:  p.rangeOfToken(b[:i], b[i:]),
 	}), b[i:], nil
 }
+86 -18
@@ -6,7 +6,7 @@ import (
 	"strings"
 	"testing"
 
-	"github.com/stretchr/testify/require"
+	"github.com/pelletier/go-toml/v2/internal/assert"
 )
 func TestParser_AST_Numbers(t *testing.T) {
@@ -141,9 +141,9 @@ func TestParser_AST_Numbers(t *testing.T) {
 			p.NextExpression()
 			err := p.Error()
 			if e.err {
-				require.Error(t, err)
+				assert.Error(t, err)
 			} else {
-				require.NoError(t, err)
+				assert.NoError(t, err)
 
 				expected := astNode{
 					Kind: KeyValue,
@@ -168,8 +168,8 @@ type (
 func compareNode(t *testing.T, e astNode, n *Node) {
 	t.Helper()
-	require.Equal(t, e.Kind, n.Kind)
-	require.Equal(t, e.Data, n.Data)
+	assert.Equal(t, e.Kind, n.Kind)
+	assert.Equal(t, e.Data, n.Data)
 
 	compareIterator(t, e.Children, n.Children())
 }
@@ -341,9 +341,9 @@ func TestParser_AST(t *testing.T) {
 			p.NextExpression()
 			err := p.Error()
 			if e.err {
-				require.Error(t, err)
+				assert.Error(t, err)
 			} else {
-				require.NoError(t, err)
+				assert.NoError(t, err)
 				compareNode(t, e.ast, p.Expression())
 			}
 		})
@@ -358,7 +358,7 @@ func BenchmarkParseBasicStringWithUnicode(b *testing.B) {
 		b.SetBytes(int64(len(input)))
 		for i := 0; i < b.N; i++ {
-			p.parseBasicString(input)
+			_, _, _, _ = p.parseBasicString(input)
 		}
 	})
 	b.Run("8", func(b *testing.B) {
@@ -367,7 +367,7 @@ func BenchmarkParseBasicStringWithUnicode(b *testing.B) {
 		b.SetBytes(int64(len(input)))
 		for i := 0; i < b.N; i++ {
-			p.parseBasicString(input)
+			_, _, _, _ = p.parseBasicString(input)
 		}
 	})
 }
@@ -383,7 +383,7 @@ func BenchmarkParseBasicStringsEasy(b *testing.B) {
 		b.SetBytes(int64(len(input)))
 		for i := 0; i < b.N; i++ {
-			p.parseBasicString(input)
+			_, _, _, _ = p.parseBasicString(input)
 		}
 	})
 }
@@ -431,9 +431,9 @@ func TestParser_AST_DateTimes(t *testing.T) {
p.NextExpression() p.NextExpression()
err := p.Error() err := p.Error()
if e.err { if e.err {
require.Error(t, err) assert.Error(t, err)
} else { } else {
require.NoError(t, err) assert.NoError(t, err)
expected := astNode{ expected := astNode{
Kind: KeyValue, Kind: KeyValue,
@@ -539,7 +539,7 @@ key5 = [ # Next to start of inline array.
// --- // ---
// 6:1->6:22 (105->126) | Comment [# Above simple value.] // 6:1->6:22 (105->126) | Comment [# Above simple value.]
// --- // ---
// 1:1->1:1 (0->0) | KeyValue [] // 7:1->7:14 (127->140) | KeyValue []
// 7:7->7:14 (133->140) | String [value] // 7:7->7:14 (133->140) | String [value]
// 7:1->7:4 (127->130) | Key [key] // 7:1->7:4 (127->130) | Key [key]
// 7:15->7:38 (141->164) | Comment [# Next to simple value.] // 7:15->7:38 (141->164) | Comment [# Next to simple value.]
@@ -552,12 +552,12 @@ key5 = [ # Next to start of inline array.
// --- // ---
// 14:1->14:22 (252->273) | Comment [# Above inline table.] // 14:1->14:22 (252->273) | Comment [# Above inline table.]
// --- // ---
// 1:1->1:1 (0->0) | KeyValue [] // 15:1->15:50 (274->323) | KeyValue []
// 15:8->15:9 (281->282) | InlineTable [] // 15:8->15:9 (281->282) | InlineTable []
// 1:1->1:1 (0->0) | KeyValue [] // 15:10->15:23 (283->296) | KeyValue []
// 15:18->15:23 (291->296) | String [Tom] // 15:18->15:23 (291->296) | String [Tom]
// 15:10->15:15 (283->288) | Key [first] // 15:10->15:15 (283->288) | Key [first]
// 1:1->1:1 (0->0) | KeyValue [] // 15:25->15:48 (298->321) | KeyValue []
// 15:32->15:48 (305->321) | String [Preston-Werner] // 15:32->15:48 (305->321) | String [Preston-Werner]
// 15:25->15:29 (298->302) | Key [last] // 15:25->15:29 (298->302) | Key [last]
// 15:1->15:5 (274->278) | Key [name] // 15:1->15:5 (274->278) | Key [name]
@@ -567,7 +567,7 @@ key5 = [ # Next to start of inline array.
// --- // ---
// 18:1->18:15 (371->385) | Comment [# Above array.] // 18:1->18:15 (371->385) | Comment [# Above array.]
// --- // ---
// 1:1->1:1 (0->0) | KeyValue [] // 19:1->19:20 (386->405) | KeyValue []
// 1:1->1:1 (0->0) | Array [] // 1:1->1:1 (0->0) | Array []
// 19:11->19:12 (396->397) | Integer [1] // 19:11->19:12 (396->397) | Integer [1]
// 19:14->19:15 (399->400) | Integer [2] // 19:14->19:15 (399->400) | Integer [2]
@@ -579,7 +579,7 @@ key5 = [ # Next to start of inline array.
// --- // ---
// 22:1->22:26 (448->473) | Comment [# Above multi-line array.] // 22:1->22:26 (448->473) | Comment [# Above multi-line array.]
// --- // ---
// 1:1->1:1 (0->0) | KeyValue [] // 23:1->31:2 (474->694) | KeyValue []
// 1:1->1:1 (0->0) | Array [] // 1:1->1:1 (0->0) | Array []
// 23:10->23:42 (483->515) | Comment [# Next to start of inline array.] // 23:10->23:42 (483->515) | Comment [# Next to start of inline array.]
// 24:3->24:38 (518->553) | Comment [# Second line before array content.] // 24:3->24:38 (518->553) | Comment [# Second line before array content.]
@@ -605,6 +605,74 @@ key5 = [ # Next to start of inline array.
// 36:1->36:21 (804->824) | Comment [# After array table.] // 36:1->36:21 (804->824) | Comment [# After array table.]
} }
func TestIterator_IsLast(t *testing.T) {
// Test IsLast on an iterator with multiple elements using public Parser API
doc := `array = [1, 2, 3]`
p := Parser{}
p.Reset([]byte(doc))
p.NextExpression()
e := p.Expression()
arr := e.Value() // The array node
it := arr.Children()
count := 0
lastCount := 0
for it.Next() {
count++
if it.IsLast() {
lastCount++
}
}
assert.Equal(t, 3, count)
assert.Equal(t, 1, lastCount)
}
func TestNodeChaining(t *testing.T) {
// Test that sibling nodes are correctly chained via Next()
// This exercises the internal PushAndChain functionality through public APIs
doc := `a.b.c = 1`
p := Parser{}
p.Reset([]byte(doc))
p.NextExpression()
e := p.Expression()
// KeyValue has children: value, then key parts (a, b, c)
keyIt := e.Key()
// Collect all key parts by following the iterator
var keys []string
for keyIt.Next() {
keys = append(keys, string(keyIt.Node().Data))
}
assert.Equal(t, []string{"a", "b", "c"}, keys)
}
func TestMultipleExpressions(t *testing.T) {
// Test parsing multiple top-level expressions
// This exercises root iteration through public APIs
doc := `
key1 = "value1"
key2 = "value2"
key3 = "value3"
`
p := Parser{}
p.Reset([]byte(doc))
var keys []string
for p.NextExpression() {
e := p.Expression()
keyIt := e.Key()
keyIt.Next()
keys = append(keys, string(keyIt.Node().Data))
}
assert.NoError(t, p.Error())
assert.Equal(t, []string{"key1", "key2", "key3"}, keys)
}
func ExampleParser() { func ExampleParser() {
doc := ` doc := `
hello = "world" hello = "world"
@@ -0,0 +1,32 @@
+package unstable
+
+// Unmarshaler is implemented by types that can unmarshal a TOML
+// description of themselves. The input is a valid TOML document
+// containing the relevant portion of the parsed document.
+//
+// For tables (including split tables defined in multiple places),
+// the data contains the raw key-value bytes from the original document,
+// with table headers adjusted to be relative to the unmarshaling target.
+type Unmarshaler interface {
+	UnmarshalTOML(data []byte) error
+}
+
+// RawMessage is a raw encoded TOML value. It implements Unmarshaler
+// and can be used to delay TOML decoding or capture raw content.
+//
+// Example usage:
+//
+//	type Config struct {
+//		Plugin RawMessage `toml:"plugin"`
+//	}
+//
+//	var cfg Config
+//	toml.NewDecoder(r).EnableUnmarshalerInterface().Decode(&cfg)
+//	// cfg.Plugin now contains the raw TOML bytes for [plugin]
+type RawMessage []byte
+
+// UnmarshalTOML implements Unmarshaler.
+func (m *RawMessage) UnmarshalTOML(data []byte) error {
+	*m = append((*m)[0:0], data...)
+	return nil
+}
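
The `append((*m)[0:0], data...)` idiom stores a copy of the input rather than aliasing the caller's slice, so the decoder is free to reuse its buffer afterwards. A minimal standalone sketch of that behavior (`RawMessage` is redeclared locally here so the snippet compiles outside the library; `demo` is a hypothetical helper, not part of the change):

```go
package main

import "fmt"

// RawMessage mirrors the type added above: a byte slice holding raw
// TOML for deferred decoding.
type RawMessage []byte

// UnmarshalTOML stores a copy of data, reusing the receiver's backing
// array when it has capacity, so the caller may reuse its buffer.
func (m *RawMessage) UnmarshalTOML(data []byte) error {
	*m = append((*m)[0:0], data...)
	return nil
}

// demo unmarshals into a RawMessage, then mutates the input buffer to
// show that the stored copy is independent of the caller's slice.
func demo() string {
	buf := []byte(`key = "value"`)
	var m RawMessage
	if err := m.UnmarshalTOML(buf); err != nil {
		return err.Error()
	}
	buf[0] = 'X' // must not affect m
	return string(m)
}

func main() {
	fmt.Println(demo()) // key = "value"
}
```

Had `UnmarshalTOML` instead done `*m = data`, the mutation of `buf` above would corrupt the stored message.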