Reduce marshal and unmarshal overhead (#1044)
* Reduce marshal and unmarshal overhead

Targeted optimizations to reduce performance overhead introduced by
recent feature additions and the unsafe removal.

Unmarshal:
- parseKeyval: access the node directly in the builder's slice to set
  Raw, bypassing NodeAt, which triggers a GC write barrier for the
  nodes pointer on every key-value expression.
- Iterator.Next: cache the *nodes slice dereference in a local variable
  to avoid repeated pointer-to-slice indirection in the hot loop.

Marshal:
- Guard shouldOmitZero calls with an inlineable options.omitzero check.
  shouldOmitZero has inlining cost 1145 (budget 80), so avoiding the
  function call when omitzero is not set removes per-field overhead.
- Inline the isNil check in encodeMap. isNil has inlining cost 93
  (budget 80), so expanding it at the single hot call site avoids
  per-map-entry function call overhead.

Update README benchmarks.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
+7 −3
@@ -28,12 +28,16 @@ func (c *Iterator) Next() bool {
 	if c.nodes == nil {
 		return false
 	}
+	nodes := *c.nodes
 	if !c.started {
 		c.started = true
-	} else if c.idx >= 0 {
-		c.idx = (*c.nodes)[c.idx].next
+	} else {
+		idx := c.idx
+		if idx >= 0 && int(idx) < len(nodes) {
+			c.idx = nodes[idx].next
+		}
 	}
-	return c.idx >= 0 && int(c.idx) < len(*c.nodes)
+	return c.idx >= 0 && int(c.idx) < len(nodes)
 }
 
 // IsLast returns true if the current node of the iterator is the last