{"id":8123,"date":"2026-02-11T10:38:42","date_gmt":"2026-02-11T09:38:42","guid":{"rendered":"https:\/\/launix.de\/launix\/?p=8123"},"modified":"2026-02-11T10:38:43","modified_gmt":"2026-02-11T09:38:43","slug":"doubling-memcp-parser-speed-heres-how","status":"publish","type":"post","link":"https:\/\/launix.de\/launix\/en\/doubling-memcp-parser-speed-heres-how\/","title":{"rendered":"Doubling MemCP Parser Speed &#8211; Here&#8217;s How"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Why Parser Speed Matters for OLTP<\/h2>\n\n\n\n<p>MemCP is an in-memory, MySQL-compatible database. It handles two kinds of workloads: analytics queries that scan millions of rows, and OLTP queries \u2014 small INSERTs, UPDATEs, DELETEs, and SELECTs coming in rapid-fire from applications.<\/p>\n\n\n\n<p>For analytics queries, parsing a 1000-character SELECT takes milliseconds against a query that runs for seconds or minutes. Nobody cares. For OLTP, every microsecond in the parser is a microsecond added to every request. A web application doing 10,000 INSERT\/s spends more time parsing SQL than executing it if the parser is slow.<\/p>\n\n\n\n<p>The queries that matter for parser performance are the ones that come in high volume: single-row inserts, point updates, key lookups. And bulk statements \u2014 <code>INSERT INTO t VALUES (...)<\/code> with hundreds or thousands of value tuples, or <code>SELECT a,b,c FROM t<\/code> with long column lists \u2014 where the parser processes the same grammar rule thousands of times in a single call.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Packrat Parsing and Why MemCP Uses It<\/h2>\n\n\n\n<p>MemCP&#8217;s SQL parser is built on <a href=\"https:\/\/github.com\/launix-de\/go-packrat\">go-packrat<\/a>, a packrat parser combinator library. A packrat parser works by combining small parsers (atoms, regex matchers) into larger ones through combinators: <code>And<\/code> (sequence), <code>Or<\/code> (alternatives), <code>Kleene<\/code> (repetition). 
The grammar for <code>INSERT INTO t VALUES (1,'a'), (2,'b')<\/code> is built from:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>insertStmt = And(INSERT, INTO, tableName, VALUES, tupleList)\ntupleList  = Many(tuple, comma)\ntuple      = And(lparen, valueList, rparen)\nvalueList  = Many(value, comma)\nvalue      = Or(integer, string, null, ...)<\/code><\/pre>\n\n\n\n<p>Packrat parsing guarantees O(n) time through memoization: every parser result at every input position is cached, so no work is done twice. This matters for grammars with backtracking \u2014 if <code>Or<\/code> tries alternative A and it fails at position 50, the partial results from positions 0-49 are cached and reused when alternative B is tried.<\/p>\n\n\n\n<p>MemCP uses packrat combinators instead of a generated parser (like yacc) for a specific reason: <strong>extensibility<\/strong>. MemCP&#8217;s Scheme runtime lets libraries define new SQL syntax by constructing parser trees at runtime:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>(define my-parser (sql-and (sql-atom \"CUSTOM\") (sql-atom \"KEYWORD\") sql-expr))\n(register-syntax \"custom_stmt\" my-parser)<\/code><\/pre>\n\n\n\n<p>A generated parser would require a build step and recompilation. Combinator parsers are data structures \u2014 they can be constructed, composed, and extended at runtime.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Problem<\/h2>\n\n\n\n<p>Profiling a bulk INSERT showed the parser responsible for tens of thousands of heap allocations per query. Each allocation is cheap individually, but at OLTP volumes they add up: GC pressure, cache misses, wasted cycles.<\/p>\n\n\n\n<p>We optimized go-packrat across five versions. The results were surprising \u2014 reducing allocations doesn&#8217;t always make things faster. 
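<\/p>\n\n\n\n<p>Allocation counts like these are easy to reproduce: Go&#8217;s <code>testing.AllocsPerRun<\/code> reports heap allocations per call. Here is a minimal, self-contained sketch with toy stand-in functions (not MemCP code), contrasting a parser step that builds a fresh heap-escaping map per call with one that does comparable work on plain integers:<\/p>\n\n\n\n

```go
package main

import (
	"fmt"
	"testing"
)

// Global sinks force results to escape to the heap, so the
// allocation is not optimized away by escape analysis.
var sinkMap map[int]bool
var sinkInt int

// withMap mimics a parser step that builds a fresh small map per
// call, the kind of per-position garbage a naive memoizer produces.
func withMap(n int) {
	m := make(map[int]bool, 4)
	m[n] = true
	sinkMap = m
}

// withoutMap does comparable work with plain integers: zero allocations.
func withoutMap(n int) {
	sinkInt += n
}

func main() {
	a := testing.AllocsPerRun(1000, func() { withMap(42) })
	b := testing.AllocsPerRun(1000, func() { withoutMap(42) })
	fmt.Println(a >= 1, b == 0) // prints: true true
}
```

\n\n\n\n<p>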
In the end, choosing the right changes got us a 2x speedup.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Measurements<\/h2>\n\n\n\n<p>All measurements: <code>(time (parse_sql ...))<\/code> in MemCP, median of multiple runs. The input is <code>INSERT INTO t VALUES('a','b'), ...<\/code> with 3, 100, 1000, and 10000 rows.<\/p>\n\n\n\n<p><strong>Absolute times:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Version<\/th><th>x3<\/th><th>x100<\/th><th>x1000<\/th><th>x10000<\/th><\/tr><\/thead><tbody><tr><td>v2.1.15 (baseline)<\/td><td>1143 us<\/td><td>14.6 ms<\/td><td>150.9 ms<\/td><td>1267.6 ms<\/td><\/tr><tr><td>v2.1.16<\/td><td>542 us<\/td><td>10.2 ms<\/td><td>78.9 ms<\/td><td>727.4 ms<\/td><\/tr><tr><td>v2.1.17<\/td><td>639 us<\/td><td>10.6 ms<\/td><td>99.7 ms<\/td><td>1013.2 ms<\/td><\/tr><tr><td>v2.1.18<\/td><td>978 us<\/td><td>22.7 ms<\/td><td>214.1 ms<\/td><td>2115.6 ms<\/td><\/tr><tr><td><strong>v2.1.19<\/strong><\/td><td><strong>577 us<\/strong><\/td><td><strong>9.0 ms<\/strong><\/td><td><strong>73.0 ms<\/strong><\/td><td><strong>675.8 ms<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Relative to v2.1.15 baseline:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Version<\/th><th>x3<\/th><th>x100<\/th><th>x1000<\/th><th>x10000<\/th><\/tr><\/thead><tbody><tr><td>v2.1.15<\/td><td>1.0x<\/td><td>1.0x<\/td><td>1.0x<\/td><td>1.0x<\/td><\/tr><tr><td>v2.1.16<\/td><td>2.1x<\/td><td>1.4x<\/td><td>1.9x<\/td><td>1.7x<\/td><\/tr><tr><td>v2.1.17<\/td><td>1.8x<\/td><td>1.4x<\/td><td>1.5x<\/td><td>1.3x<\/td><\/tr><tr><td>v2.1.18<\/td><td>1.2x<\/td><td>0.6x<\/td><td>0.7x<\/td><td>0.6x<\/td><\/tr><tr><td><strong>v2.1.19<\/strong><\/td><td><strong>2.0x<\/strong><\/td><td><strong>1.6x<\/strong><\/td><td><strong>2.1x<\/strong><\/td><td><strong>1.9x<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Per-row cost<\/strong> (median \/ row 
count):<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Version<\/th><th>x3<\/th><th>x100<\/th><th>x1000<\/th><th>x10000<\/th><\/tr><\/thead><tbody><tr><td>v2.1.15<\/td><td>381 us<\/td><td>146 us<\/td><td>151 us<\/td><td>127 us<\/td><\/tr><tr><td>v2.1.16<\/td><td>181 us<\/td><td>102 us<\/td><td>79 us<\/td><td>73 us<\/td><\/tr><tr><td>v2.1.17<\/td><td>213 us<\/td><td>106 us<\/td><td>100 us<\/td><td>101 us<\/td><\/tr><tr><td>v2.1.18<\/td><td>326 us<\/td><td>227 us<\/td><td>214 us<\/td><td>212 us<\/td><\/tr><tr><td><strong>v2.1.19<\/strong><\/td><td><strong>192 us<\/strong><\/td><td><strong>90 us<\/strong><\/td><td><strong>73 us<\/strong><\/td><td><strong>68 us<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>v2.1.19 achieves the best per-row cost at scale: 68 us\/row at 10,000 rows, 7-11% faster than v2.1.16 and nearly 2x the baseline. Its per-row cost drops with scale (192 -&gt; 68 us), showing good amortization of fixed overhead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Changed \u2014 and What We Learned<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">v2.1.16: Replacing Expensive Operations (2x speedup)<\/h3>\n\n\n\n<p>Seven internal changes, all pure wins:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Keyword matching without regex.<\/strong> Every SQL keyword was matched via <code>(?i)^SELECT<\/code> with <code>FindStringSubmatch<\/code>. Replaced with <code>strings.EqualFold<\/code> for case-insensitive, <code>==<\/code> for case-sensitive. With 313 atom parsers in the grammar, this was the highest-impact single change.<\/li>\n\n\n\n<li><strong>Word boundaries as <code>[]bool<\/code> instead of <code>map[int]bool<\/code>.<\/strong> Flat slice indexed by position, one allocation instead of thousands of map buckets.<\/li>\n\n\n\n<li><strong>Whitespace skip fast-path.<\/strong> A byte check (<code>&lt;= ' '<\/code> or <code>== '\/'<\/code>) gates the regex call. 
~80% of positions have no whitespace, so <code>Skip()<\/code> becomes a single comparison.<\/li>\n\n\n\n<li><strong>Flat memoization.<\/strong> <code>map[int]map[Parser]*MemoEntry<\/code> flattened to <code>[]map[Parser]*MemoEntry<\/code>. One array index replaces one hash lookup for the outer dimension.<\/li>\n\n\n\n<li><strong>Lr object pooling.<\/strong> Left-recursion markers recycled via <code>sync.Pool<\/code>.<\/li>\n\n\n\n<li><strong>Slice pre-allocation.<\/strong> <code>And\/Kleene\/ManyParser<\/code> pre-allocate result slices instead of growing from nil.<\/li>\n\n\n\n<li><strong>Specialized regex matchers.<\/strong> MemCP&#8217;s 9 regex patterns are recognized at construction time and replaced with hand-written <code>func(string) int<\/code> matchers. Identifiers use a 256-bit bitmap lookup. Integers and floats use byte loops. String bodies use a backslash-skip loop. <code>regexp.Regexp<\/code> is never compiled for these patterns.<\/li>\n<\/ol>\n\n\n\n<p>Every one of these replaces an expensive operation with a cheaper one. The replacement is faster AND allocates less.<\/p>\n\n\n\n<p>Best of all, the user-facing interface didn&#8217;t change: users still declare their regex-based grammar, and common patterns in the regex are replaced with faster hand-written parsers internally.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">v2.1.17-18: The Allocation Trap (1.4-2.9x regression)<\/h3>\n\n\n\n<p>These versions pursued allocation reduction further \u2014 replacing the inner <code>map[Parser]*MemoEntry<\/code> with a linked list through slab-allocated entries (v2.1.17), then packing the per-position data into a uint32 with multi-slab indirection (v2.1.18).<\/p>\n\n\n\n<p>Allocations dropped from ~630 to ~30 (v2.1.17) to ~7 (v2.1.18). 
Real-world performance went in the opposite direction.<\/p>\n\n\n\n<p>What went wrong:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>MemoEntry grew from 40 to 64 bytes<\/strong> (added <code>rule<\/code> and <code>nextMemo<\/code> fields for the linked list). Entries per cache line dropped from 1.6 to 1.0.<\/li>\n\n\n\n<li><strong>Linked-list traversal replaced hash lookup.<\/strong> Go&#8217;s small maps (2-5 entries) use contiguous bucket arrays \u2014 one cache line load. The linked list follows pointers to slab entries allocated in DFS parse-tree order, not grouped by position. Each hop risks a cache miss.<\/li>\n\n\n\n<li><strong>Double indirection in v2.1.18.<\/strong> Every memo access became <code>s.memoSlabs[idx>>8][idx&amp;0xFF]<\/code> \u2014 two pointer dereferences plus arithmetic, where a direct <code>*MemoEntry<\/code> pointer was one.<\/li>\n\n\n\n<li><strong>The regression scales with input size.<\/strong> At 3 rows, the working set fits in L1\/L2 regardless \u2014 v2.1.18 is 1.8x slower. At 10,000 rows, the working set exceeds L2, every cache miss pays full penalty \u2014 v2.1.18 is 2.9x slower.<\/li>\n<\/ul>\n\n\n\n<p>The per-row cost confirms this. v2.1.16&#8217;s drops from 181 to 73 us\/row (fixed overhead amortizes, marginal cost is low). v2.1.18&#8217;s stays flat at ~212-227 us\/row \u2014 per-access overhead that scales with the working set.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">v2.1.19: Combining the Best (new fastest)<\/h3>\n\n\n\n<p>Reverted to v2.1.16&#8217;s map-based memoization. Kept the additive features from v2.1.17-18 that don&#8217;t affect memo access patterns:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Combinator buffer reuse<\/strong> (v2.1.17): And\/Kleene\/ManyParser reuse a <code>[]T<\/code> buffer across non-recursive calls, detected by a depth counter. 
Avoids per-call slice allocations.<\/li>\n\n\n\n<li><strong>FindStringIndex<\/strong> (v2.1.17): <code>MatchRegexp<\/code> uses <code>FindStringIndex<\/code> instead of <code>FindStringSubmatch<\/code>, avoiding a <code>[]string<\/code> allocation per regex match.<\/li>\n\n\n\n<li><strong>CharMap dispatch<\/strong> (v2.1.18): <code>OrParser.SetCharMap()<\/code> installs a first-byte lookup table. After skipping whitespace, only the 1-2 sub-parsers matching the current byte are tried instead of all alternatives.<\/li>\n\n\n\n<li><strong>NoMemo bypass<\/strong> (v2.1.18): <code>KleeneParser<\/code> and <code>ManyParser<\/code> gained a <code>NoMemo<\/code> flag that calls <code>Match()<\/code> directly, bypassing the memo table entirely.<\/li>\n\n\n\n<li><strong>Scanner.Reset()<\/strong> (v2.1.18): Reinitializes a Scanner for a new input, reusing allocated slices.<\/li>\n<\/ul>\n\n\n\n<p>The result is 7-11% faster than v2.1.16 at 100+ rows. The buffer reuse, FindStringIndex, and CharMap dispatch each shave a small amount off the per-row cost without affecting cache behavior.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Lesson: Allocations Are a Proxy Metric<\/h2>\n\n\n\n<p>The allocation count across versions tells a misleading story:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Version<\/th><th>Allocs (synthetic, 200 tuples)<\/th><th>Real-world speed<\/th><\/tr><\/thead><tbody><tr><td>v2.1.15<\/td><td>~6,800<\/td><td>slowest<\/td><\/tr><tr><td>v2.1.16<\/td><td>~630<\/td><td>fast<\/td><\/tr><tr><td>v2.1.17<\/td><td>~30<\/td><td>slower<\/td><\/tr><tr><td>v2.1.18<\/td><td>~7<\/td><td>slowest since baseline<\/td><\/tr><tr><td>v2.1.19<\/td><td>~630<\/td><td><strong>fastest<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>v2.1.19 has the same allocation count as v2.1.16 and is the fastest version. 
v2.1.18 has 99% fewer allocations and is the slowest since baseline.<\/p>\n\n\n\n<p>What matters is <strong>memory access latency<\/strong>, not allocation count. Go&#8217;s allocator and GC handle short-lived objects efficiently \u2014 allocating a small map costs ~200ns, and the GC reclaims it for free if it dies young. Replacing that map with a slab-allocated linked list eliminates the allocation but makes every subsequent lookup slower by polluting cache lines with scattered, pointer-chased data.<\/p>\n\n\n\n<p>For hot-loop data structures, spatial locality matters more than allocation count. A small Go map with 2-5 entries has excellent locality: one contiguous bucket array, O(1) lookup, fits in a cache line. That&#8217;s hard to beat with hand-rolled data structures.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Does the Grammar Even Need Memoization?<\/h2>\n\n\n\n<p>Packrat memoization guarantees O(n) by preventing redundant re-parsing during backtracking. But SQL is essentially LL(1) \u2014 each construct is determined by its first token. The top-level dispatch (<code>SELECT<\/code> vs <code>INSERT<\/code>) benefits from memoization: ~50 lookups per query. The Kleene\/Many inner loop processing value tuples or column lists does not \u2014 it advances position-by-position, never revisiting. Every memo entry created in the loop is written once and never read.<\/p>\n\n\n\n<p>For a 10,000-row INSERT, the inner loop creates tens of thousands of memo entries that are never reused \u2014 megabytes of write-only data occupying cache lines.<\/p>\n\n\n\n<p>The <code>NoMemo<\/code> flag addresses this. When set, Kleene\/ManyParser call <code>Match()<\/code> directly, bypassing <code>applyRule()<\/code> and the memo table entirely. This turns the inner loop into pure recursive descent \u2014 no memo overhead, no cache pollution.<\/p>\n\n\n\n<p>A synthetic <code>SELECT col_0, col_1, ... 
col_N FROM t<\/code> benchmark inside go-packrat shows the effect clearly:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Columns<\/th><th>NewScanner<\/th><th>Reset<\/th><th>Reset + NoMemo<\/th><\/tr><\/thead><tbody><tr><td>3<\/td><td>47 allocs \/ 4.6 us<\/td><td>35 allocs \/ 2.0 us<\/td><td>21 allocs \/ 1.5 us<\/td><\/tr><tr><td>10<\/td><td>89 allocs \/ 8.9 us<\/td><td>77 allocs \/ 4.3 us<\/td><td>21 allocs \/ 2.1 us<\/td><\/tr><tr><td>50<\/td><td>329 allocs \/ 34 us<\/td><td>317 allocs \/ 19 us<\/td><td>21 allocs \/ 4.9 us<\/td><\/tr><tr><td>200<\/td><td>1229 allocs \/ 122 us<\/td><td>1217 allocs \/ 71 us<\/td><td>21 allocs \/ 15 us<\/td><\/tr><tr><td>1000<\/td><td>6029 allocs \/ 507 us<\/td><td>6018 allocs \/ 393 us<\/td><td>21 allocs \/ 74 us<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>With NoMemo, the allocation count is <strong>constant at 21<\/strong> regardless of column count \u2014 the column list loop allocates nothing. The 21 allocs are structural overhead (Scanner construction, the SELECT and FROM keyword lookups). At 1000 columns, NoMemo is 6.9x faster than the default path and scales perfectly linearly: 74 ns\/column with no cache degradation.<\/p>\n\n\n\n<p>The same applies to INSERT value lists, Kleene repetitions, and any other Many\/Kleene loop over non-left-recursive sub-parsers. Combined with map-based memoization for the structural parse, this gives:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Structural parse<\/strong> (statement dispatch, clauses): memoization, small working set, benefits from caching<\/li>\n\n\n\n<li><strong>Repetition loops<\/strong> (value tuples, column lists): direct calls, zero memo overhead<\/li>\n<\/ul>\n\n\n\n<p>NoMemo and Scanner.Reset() require caller-side changes (setting the flag, pooling Scanners). These are available in v2.1.19 but not yet activated in MemCP. 
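<\/p>\n\n\n\n<p>The write-once, never-read behavior is easy to reproduce with a toy memoizer (a simplified sketch, not go-packrat&#8217;s actual API): parse a comma-separated value list and count memo writes against memo hits.<\/p>\n\n\n\n

```go
package main

import "fmt"

// A toy packrat scanner: memoizes rule results by (rule, position)
// and counts how often the memo table is written vs. actually reused.
type scanner struct {
	input        string
	memo         map[[2]int]int // (ruleID, pos) -> end position, -1 = fail
	writes, hits int
}

func (s *scanner) apply(rule, pos int, match func(int) int) int {
	key := [2]int{rule, pos}
	if end, ok := s.memo[key]; ok {
		s.hits++
		return end
	}
	end := match(pos)
	s.memo[key] = end
	s.writes++
	return end
}

// value: a run of digits (rule 0)
func (s *scanner) value(pos int) int {
	return s.apply(0, pos, func(p int) int {
		j := p
		for j < len(s.input) && s.input[j] >= '0' && s.input[j] <= '9' {
			j++
		}
		if j == p {
			return -1 // no digits: fail
		}
		return j
	})
}

// list: value (',' value)* -- the Kleene-style inner loop
func (s *scanner) list(pos int) int {
	i := s.value(pos)
	if i < 0 {
		return -1
	}
	for i < len(s.input) && s.input[i] == ',' {
		j := s.value(i + 1)
		if j < 0 {
			break
		}
		i = j
	}
	return i
}

func main() {
	s := &scanner{input: "1,2,3,4,5,6,7,8,9,10", memo: map[[2]int]int{}}
	s.list(0)
	// The loop advances strictly forward: every entry is written, none reused.
	fmt.Println(s.writes, s.hits) // prints: 10 0
}
```

\n\n\n\n<p>In this sketch, a NoMemo-style flag would simply call the match function directly instead of going through <code>apply()<\/code>, mirroring what <code>KleeneParser<\/code> and <code>ManyParser<\/code> do with the flag set.<\/p>\n\n\n\n<p>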
When they are, the per-query allocation count will drop further without the cache locality penalty.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What&#8217;s Next<\/h2>\n\n\n\n<p>The last remaining source of per-query allocations is the callback interface. The merge callback <code>func(string, ...T) T<\/code> has no access to per-query state. MemCP&#8217;s callbacks allocate result objects on the heap because there&#8217;s no way to pass a per-query arena through the parser.<\/p>\n\n\n\n<p>Planned: a <code>UserData any<\/code> field on Scanner, accessible from callbacks. This lets the callback use a per-query arena allocator, replacing individual heap allocations with bulk arena allocation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Outlook: JIT for the Parser<\/h2>\n\n\n\n<p>The next logical step is to apply the MemCP JIT not only to scan loops, but also to the <strong>packrat parser objects<\/strong> themselves.<\/p>\n\n\n\n<p>Today, the parser is built from combinators (<code>And<\/code>, <code>Or<\/code>, <code>Many<\/code>, <code>Atom<\/code>) that are interpreted at runtime. Even after heavy optimization, this is still a generic dispatch mechanism.<\/p>\n\n\n\n<p>In the future, frequently used grammar paths \u2014 such as <code>INSERT<\/code>, <code>SELECT<\/code>, value lists, or identifiers \u2014 could be compiled at runtime into specialized matching loops. 
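<\/p>\n\n\n\n<p>As a hand-written illustration of the target shape (a sketch by analogy, not actual JIT output), a specialized matcher for an INSERT value-tuple list collapses the combinator tree into compact byte loops with first-byte dispatch:<\/p>\n\n\n\n

```go
package main

import "fmt"

// matchValue matches one value starting at i: an integer or a
// single-quoted string. First-byte dispatch picks the loop.
// Returns the end position, or -1 on failure.
func matchValue(s string, i int) int {
	if i >= len(s) {
		return -1
	}
	switch c := s[i]; {
	case c == '\'': // string literal: scan to the closing quote
		for j := i + 1; j < len(s); j++ {
			if s[j] == '\'' {
				return j + 1
			}
		}
		return -1
	case c >= '0' && c <= '9': // integer: plain byte loop
		j := i + 1
		for j < len(s) && s[j] >= '0' && s[j] <= '9' {
			j++
		}
		return j
	}
	return -1
}

// matchTuple matches (value, value, ...) starting at i.
func matchTuple(s string, i int) int {
	if i >= len(s) || s[i] != '(' {
		return -1
	}
	i++
	for {
		i = matchValue(s, i)
		if i < 0 {
			return -1
		}
		if i < len(s) && s[i] == ',' {
			i++
			continue
		}
		break
	}
	if i < len(s) && s[i] == ')' {
		return i + 1
	}
	return -1
}

// matchTupleList matches tuple (',' tuple)* and returns the number
// of bytes consumed (0 if the input does not start with a tuple).
func matchTupleList(s string) int {
	i := matchTuple(s, 0)
	if i < 0 {
		return 0
	}
	for i < len(s) && s[i] == ',' {
		j := matchTuple(s, i+1)
		if j < 0 {
			return i
		}
		i = j
	}
	return i
}

func main() {
	in := `(1,'a'),(2,'b')`
	fmt.Println(matchTupleList(in) == len(in)) // prints: true
}
```

\n\n\n\n<p>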
This would allow:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>embedding static rule structure as immediates<\/li>\n\n\n\n<li>turning <code>Or<\/code> alternatives into direct first-byte dispatch<\/li>\n\n\n\n<li>emitting <code>Many<\/code> as compact native loops<\/li>\n\n\n\n<li>eliminating unnecessary memo lookups<\/li>\n<\/ul>\n\n\n\n<p>The result would be a transition from a flexible combinator interpreter to a specialized native state machine \u2014 without sacrificing the runtime extensibility that makes the packrat approach powerful.<\/p>\n\n\n\n<p>The parser speedup was step one.<br>A parser JIT could be step two.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Try It<\/h2>\n\n\n\n<p>go-packrat is open source under GPLv3 (with custom licensing available):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>go get github.com\/launix-de\/go-packrat\/v2@v2.1.19<\/code><\/pre>\n\n\n\n<p>MemCP is at <a href=\"https:\/\/github.com\/launix-de\/memcp\">github.com\/launix-de\/memcp<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Why Parser Speed Matters for OLTP MemCP is an in-memory, MySQL-compatible database. It handles two kinds of workloads: analytics queries that scan millions of rows, and OLTP queries \u2014 small INSERTs, UPDATEs, DELETEs, and SELECTs coming in rapid fire from applications. 
For OLTP \/ analytics queries, parsing a 1000-character SELECT takes milliseconds against a query&#8230;<\/p>","protected":false},"author":2,"featured_media":8124,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_editorskit_title_hidden":false,"_editorskit_reading_time":0,"_editorskit_is_block_options_detached":false,"_editorskit_block_options_position":"{}","_uag_custom_page_level_css":"","footnotes":""},"categories":[129,128],"tags":[],"class_list":["post-8123","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-memcp","category-programming","single-item"],"featured_image_urls_v2":{"full":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"thumbnail":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-150x150.png",150,150,true],"medium":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-300x200.png",300,200,true],"medium_large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-768x512.png",751,501,true],"large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-1024x683.png",751,501,true],"1536x1536":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"2048x2048":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"trp-custom-language-flag":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-18x12.png",18,12,true],"xs-thumb":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-64x64.png",64,64,true],"appku-shop-single":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1
c7a010-a57c-4a4f-9fe0-dc31ab06bc06-620x500.png",620,500,true]},"post_excerpt_stackable_v2":"<p>Why Parser Speed Matters for OLTP MemCP is an in-memory, MySQL-compatible database. It handles two kinds of workloads: analytics queries that scan millions of rows, and OLTP queries \u2014 small INSERTs, UPDATEs, DELETEs, and SELECTs coming in rapid fire from applications. For OLTP \/ analytics queries, parsing a 1000-character SELECT takes milliseconds against a query that runs for seconds or minutes. Nobody cares. For OLTP, every microsecond in the parser is a microsecond added to every request. A web application doing 10,000 INSERT\/s spends more time parsing SQL than executing it if the parser is slow. The queries that matter&hellip;<\/p>\n","category_list_v2":"<a href=\"https:\/\/launix.de\/launix\/en\/category\/memcp\/\" rel=\"category tag\">MemCP<\/a>, <a href=\"https:\/\/launix.de\/launix\/en\/category\/programming\/\" rel=\"category tag\">Programming<\/a>","author_info_v2":{"name":"Carl-Philip H\u00e4nsch","url":"https:\/\/launix.de\/launix\/en\/author\/carli\/"},"comments_num_v2":"0 
comments","uagb_featured_image_src":{"full":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"thumbnail":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-150x150.png",150,150,true],"medium":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-300x200.png",300,200,true],"medium_large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-768x512.png",751,501,true],"large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-1024x683.png",751,501,true],"1536x1536":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"2048x2048":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06.png",1536,1024,false],"trp-custom-language-flag":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-18x12.png",18,12,true],"xs-thumb":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-64x64.png",64,64,true],"appku-shop-single":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2026\/02\/a1c7a010-a57c-4a4f-9fe0-dc31ab06bc06-620x500.png",620,500,true]},"uagb_author_info":{"display_name":"Carl-Philip H\u00e4nsch","author_link":"https:\/\/launix.de\/launix\/en\/author\/carli\/"},"uagb_comment_info":0,"uagb_excerpt":"Why Parser Speed Matters for OLTP MemCP is an in-memory, MySQL-compatible database. It handles two kinds of workloads: analytics queries that scan millions of rows, and OLTP queries \u2014 small INSERTs, UPDATEs, DELETEs, and SELECTs coming in rapid fire from applications. 
For OLTP \/ analytics queries, parsing a 1000-character SELECT takes milliseconds against a query...","_links":{"self":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/8123","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/comments?post=8123"}],"version-history":[{"count":2,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/8123\/revisions"}],"predecessor-version":[{"id":8126,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/8123\/revisions\/8126"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/media\/8124"}],"wp:attachment":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/media?parent=8123"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/categories?post=8123"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/tags?post=8123"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}