{"id":4877,"date":"2023-02-08T08:46:28","date_gmt":"2023-02-08T07:46:28","guid":{"rendered":"https:\/\/launix.de\/launix\/?p=4877"},"modified":"2023-02-08T11:26:34","modified_gmt":"2023-02-08T10:26:34","slug":"walnut-ai-a-cpu-optimized-ai-neuronal-network","status":"publish","type":"post","link":"https:\/\/launix.de\/launix\/en\/walnut-ai-a-cpu-optimized-ai-neuronal-network\/","title":{"rendered":"Walnut AI &#8211; a CPU-optimized AI neuronal network"},"content":{"rendered":"<p>Our brain is shaped like a walnut. And that&#8217;s for a reason.<\/p>\n\n\n\n<!--more-->\n\n\n\n<p>The folds and bulges of our brain are nature&#8217;s clever way of providing a very interesting structure for processing data.<\/p>\n\n\n\n<p>At the intersection of two bulges, data can be exchanged, while inside a bulge the &#8220;thinking&#8221; remains largely separate.<\/p>\n\n\n<div class=\"wp-block-image is-style-editorskit-shadow\">\n<figure class=\"alignright size-large is-resized\"><a href=\"https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1024x860.png\" alt=\"\" class=\"wp-image-4879\" width=\"256\" height=\"215\" srcset=\"https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1024x860.png 1024w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-300x252.png 300w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-768x645.png 768w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-600x504.png 600w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1536x1290.png 1536w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image.png 1702w\" sizes=\"auto, (max-width: 256px) 100vw, 256px\" \/><\/a><\/figure><\/div>\n\n\n<p>Let me introduce Walnut AI: an AI pattern that operates on a massive number of small, fixed-size matrices. Why fixed-size matrices? 
Because compilers can loop-unroll all algorithms on the matrix as long as its size is small and fixed. This lets the network exploit the SSE and AVX extensions of modern CPUs.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-large is-resized\"><a href=\"https:\/\/en.wikipedia.org\/wiki\/Bitonic_sorter#\/media\/File:Batcher_Bitonic_Mergesort_for_eight_inputs.svg\" target=\"_blank\" rel=\"noreferrer noopener\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-1024x607.png\" alt=\"\" class=\"wp-image-4880\" width=\"256\" height=\"152\" srcset=\"https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-1024x607.png 1024w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-300x178.png 300w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-768x456.png 768w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-600x356.png 600w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-1536x911.png 1536w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1-840x500.png 840w, https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/image-1.png 1920w\" sizes=\"auto, (max-width: 256px) 100vw, 256px\" \/><\/a><figcaption class=\"wp-element-caption\"><a href=\"https:\/\/en.wikipedia.org\/wiki\/Bitonic_sorter#\/media\/File:Batcher_Bitonic_Mergesort_for_eight_inputs.svg\" class=\"ek-link\">License: CC BY-SA 3.0<\/a><\/figcaption><\/figure><\/div>\n\n\n<p>Of course, small matrices cannot process as many inputs as a typical AI scenario requires. But this is no obstacle: one can simply connect multiple small matrices into a big network, as in the bitonic sorter network. The bitonic pattern shuffles data around in such a way that every value can be compared to every other value. 
It has a depth complexity of O(log\u00b2N) (in the parallel case) and a requirement of O(N*log\u00b2N) sorter nodes.<\/p>\n\n\n\n<p>We already trained a single walnut node to learn sequences of numbers like you would in a language model like <strong>ChatGPT<\/strong>. This is the output of a 16&#215;16 Matrix learning the pattern 1,1,0,1,1,0,1,1,0,1,1,0:<\/p>\n\n\n\n<pre style=\"white-space: pre;\">in  = 1.0000 0.5856 0.7723 0.7887 0.4882 0.2105 0.1576 0.0354 0.3160 0.0931 0.9431 0.9918 0.0214 0.4702 0.0218 0.0164 \nout = 0.9997 0.9470 0.8375 0.9348 0.9596 0.6570 0.3475 0.1244 0.6792 0.1485 0.9865 0.9987 0.1671 0.5460 0.1714 0.1164 \nerr = 0.0003 0.0013 0.0000 -0.0003 0.0004 0.0001 -0.0004 0.0010 0.0002 0.0004 0.0014 0.0015 -0.0009 -0.0003 0.0009 -0.0011 \n\nin  = 1.0000 0.9470 0.8375 0.9348 0.9596 0.6570 0.3475 0.1244 0.6792 0.1485 0.9865 0.9987 0.1671 0.5460 0.1714 0.1164 \nout = 0.0007 0.9565 0.9029 0.9496 0.9265 0.6060 0.1951 0.0385 0.6349 0.1823 0.9919 0.9998 0.0632 0.6765 0.0597 0.0489 \nerr = -0.0007 -0.0000 -0.0000 0.0003 -0.0010 -0.0002 0.0013 0.0012 0.0006 -0.0013 -0.0004 -0.0001 0.0005 -0.0001 -0.0004 -0.0000 \n\nin  = 0.0000 0.9565 0.9029 0.9496 0.9265 0.6060 0.1951 0.0385 0.6349 0.1823 0.9919 0.9998 0.0632 0.6765 0.0597 0.0489 \nout = 0.9998 0.5857 0.7724 0.7888 0.4879 0.2104 0.1576 0.0354 0.3158 0.0931 0.9432 0.9918 0.0214 0.4704 0.0218 0.0164 \nerr = 0.0002 0.0009 -0.0005 -0.0004 0.0003 -0.0015 0.0001 0.0003 -0.0009 -0.0006 0.0000 -0.0013 0.0004 -0.0011 0.0003 0.0002 \n\nin  = 1.0000 0.5857 0.7724 0.7888 0.4879 0.2104 0.1576 0.0354 0.3158 0.0931 0.9432 0.9918 0.0214 0.4704 0.0218 0.0164 \nout = 0.9997 0.9470 0.8377 0.9348 0.9596 0.6569 0.3475 0.1245 0.6793 0.1484 0.9865 0.9987 0.1672 0.5461 0.1711 0.1164 \nerr = 0.0003 -0.0003 0.0008 0.0011 -0.0000 -0.0013 0.0001 0.0001 -0.0002 -0.0005 0.0001 0.0004 -0.0003 -0.0011 -0.0003 0.0012 \n\nin  = 1.0000 0.9470 0.8377 0.9348 0.9596 0.6569 0.3475 0.1245 0.6793 0.1484 0.9865 0.9987 0.1672 0.5461 0.1711 0.1164 
\nout = 0.0007 0.9565 0.9030 0.9495 0.9265 0.6056 0.1951 0.0386 0.6348 0.1824 0.9919 0.9998 0.0632 0.6765 0.0596 0.0489 \nerr = -0.0007 0.0004 0.0005 0.0001 0.0005 0.0003 -0.0009 -0.0003 -0.0003 -0.0002 0.0002 0.0015 -0.0008 -0.0002 -0.0005 0.0003 \n\nin  = 0.0000 0.9565 0.9030 0.9495 0.9265 0.6056 0.1951 0.0386 0.6348 0.1824 0.9919 0.9998 0.0632 0.6765 0.0596 0.0489 \nout = 0.9998 0.5855 0.7724 0.7888 0.4882 0.2103 0.1577 0.0354 0.3157 0.0931 0.9433 0.9918 0.0214 0.4706 0.0218 0.0164 \nerr = 0.0002 -0.0002 -0.0000 -0.0006 0.0005 -0.0001 0.0003 -0.0011 0.0000 -0.0006 0.0008 0.0005 0.0002 0.0001 0.0010 0.0003 \n\nin  = 1.0000 0.5855 0.7724 0.7888 0.4882 0.2103 0.1577 0.0354 0.3157 0.0931 0.9433 0.9918 0.0214 0.4706 0.0218 0.0164 \nout = 0.9997 0.9470 0.8378 0.9348 0.9596 0.6568 0.3474 0.1245 0.6793 0.1484 0.9866 0.9987 0.1671 0.5464 0.1712 0.1164 \nerr = 0.0003 0.0012 0.0008 -0.0003 -0.0004 -0.0004 0.0002 -0.0007 0.0007 -0.0006 0.0006 0.0000 0.0009 0.0002 -0.0005 0.0004 \n\nin  = 1.0000 0.9470 0.8378 0.9348 0.9596 0.6568 0.3474 0.1245 0.6793 0.1484 0.9866 0.9987 0.1671 0.5464 0.1712 0.1164 \nout = 0.0007 0.9565 0.9030 0.9495 0.9266 0.6056 0.1952 0.0386 0.6349 0.1825 0.9919 0.9998 0.0632 0.6768 0.0596 0.0489 \nerr = -0.0007 -0.0000 0.0000 0.0002 -0.0004 0.0003 -0.0004 0.0012 0.0001 -0.0006 0.0012 0.0008 -0.0015 -0.0010 -0.0012 -0.0001 \n\nin  = 0.0000 0.9565 0.9030 0.9495 0.9266 0.6056 0.1952 0.0386 0.6349 0.1825 0.9919 0.9998 0.0632 0.6768 0.0596 0.0489 \nout = 0.9998 0.5857 0.7725 0.7889 0.4883 0.2103 0.1576 0.0354 0.3157 0.0932 0.9433 0.9918 0.0214 0.4709 0.0218 0.0164 \nerr = 0.0002 0.0007 0.0001 -0.0013 0.0001 -0.0003 -0.0000 0.0001 -0.0002 0.0003 0.0006 0.0002 -0.0005 0.0008 -0.0008 -0.0014 \n\nin  = 1.0000 0.5857 0.7725 0.7889 0.4883 0.2103 0.1576 0.0354 0.3157 0.0932 0.9433 0.9918 0.0214 0.4709 0.0218 0.0164 \nout = 0.9997 0.9470 0.8378 0.9348 0.9596 0.6567 0.3474 0.1244 0.6794 0.1485 0.9866 0.9987 0.1671 0.5466 0.1711 0.1163 \nerr = 0.0003 -0.0005 0.0007 
0.0005 -0.0001 -0.0005 -0.0013 0.0005 -0.0006 0.0003 -0.0006 -0.0000 0.0001 0.0000 -0.0005 -0.0011 \n\nin  = 1.0000 0.9470 0.8378 0.9348 0.9596 0.6567 0.3474 0.1244 0.6794 0.1485 0.9866 0.9987 0.1671 0.5466 0.1711 0.1163 \nout = 0.0007 0.9565 0.9030 0.9495 0.9266 0.6055 0.1951 0.0385 0.6347 0.1823 0.9919 0.9998 0.0632 0.6772 0.0596 0.0489 \nerr = -0.0007 -0.0003 -0.0000 0.0003 0.0009 0.0002 0.0008 -0.0005 0.0001 -0.0014 0.0003 -0.0000 0.0000 0.0006 -0.0012 -0.0003 \n\nin  = 0.0000 0.9565 0.9030 0.9495 0.9266 0.6055 0.1951 0.0385 0.6347 0.1823 0.9919 0.9998 0.0632 0.6772 0.0596 0.0489 \nout = 0.9998 0.5856 0.7726 0.7887 0.4883 0.2103 0.1577 0.0354 0.3156 0.0932 0.9434 0.9918 0.0214 0.4712 0.0217 0.0164 \nerr = 0.0002 0.0006 -0.0013 -0.0008 -0.0001 0.0012 -0.0004 -0.0012 0.0008 -0.0005 0.0009 -0.0010 -0.0004 -0.0007 -0.0005 -0.0007 \n\nin  = 1.0000 0.5856 0.7726 0.7887 0.4883 0.2103 0.1577 0.0354 0.3156 0.0932 0.9434 0.9918 0.0214 0.4712 0.0217 0.0164 \nout = 0.9997 0.9470 0.8378 0.9347 0.9596 0.6568 0.3474 0.1244 0.6793 0.1483 0.9866 0.9987 0.1673 0.5470 0.1710 0.1162 \nerr = 0.0003 0.0008 -0.0009 -0.0002 0.0006 -0.0005 0.0012 0.0002 0.0002 -0.0005 -0.0004 -0.0004 -0.0002 -0.0005 0.0003 0.0001 \n\nin  = 1.0000 0.9470 0.8378 0.9347 0.9596 0.6568 0.3474 0.1244 0.6793 0.1483 0.9866 0.9987 0.1673 0.5470 0.1710 0.1162 \nout = 0.0007 0.9565 0.9030 0.9494 0.9265 0.6056 0.1952 0.0385 0.6344 0.1823 0.9920 0.9998 0.0633 0.6773 0.0595 0.0489 \nerr = -0.0007 -0.0001 -0.0005 0.0002 -0.0011 0.0009 0.0019 0.0003 -0.0014 -0.0008 -0.0005 0.0005 0.0002 0.0006 0.0004 0.0008 \n\nin  = 0.0000 0.9565 0.9030 0.9494 0.9265 0.6056 0.1952 0.0385 0.6344 0.1823 0.9920 0.9998 0.0633 0.6773 0.0595 0.0489 \nout = 0.9998 0.5857 0.7725 0.7886 0.4878 0.2103 0.1578 0.0354 0.3153 0.0931 0.9434 0.9918 0.0214 0.4712 0.0217 0.0164 \nerr = 0.0002 -0.0012 -0.0005 0.0002 -0.0009 -0.0011 -0.0002 -0.0005 -0.0003 -0.0006 -0.0009 0.0008 -0.0006 0.0008 -0.0007 0.0003 \n\nin  = 1.0000 0.5857 0.7725 0.7886 0.4878 
0.2103 0.1578 0.0354 0.3153 0.0931 0.9434 0.9918 0.0214 0.4712 0.0217 0.0164 \nout = 0.9997 0.9469 0.8378 0.9347 0.9596 0.6567 0.3476 0.1243 0.6790 0.1482 0.9866 0.9987 0.1673 0.5473 0.1710 0.1162 \nerr = 0.0003 0.0005 -0.0007 -0.0001 -0.0009 -0.0007 -0.0006 0.0003 0.0005 0.0008 -0.0004 -0.0008 -0.0004 -0.0006 0.0001 -0.0006 \n\nin  = 1.0000 0.9469 0.8378 0.9347 0.9596 0.6567 0.3476 0.1243 0.6790 0.1482 0.9866 0.9987 0.1673 0.5473 0.1710 0.1162 \nout = 0.0007 0.9564 0.9030 0.9494 0.9264 0.6053 0.1955 0.0385 0.6341 0.1821 0.9920 0.9998 0.0632 0.6774 0.0595 0.0488 \nerr = -0.0007 0.0002 -0.0004 0.0007 -0.0001 0.0011 -0.0004 0.0009 -0.0004 -0.0011 -0.0011 0.0016 -0.0001 -0.0005 -0.0004 -0.0001 \n\nin  = 0.0000 0.9564 0.9030 0.9494 0.9264 0.6053 0.1955 0.0385 0.6341 0.1821 0.9920 0.9998 0.0632 0.6774 0.0595 0.0488 \nout = 0.9998 0.5853 0.7725 0.7886 0.4880 0.2104 0.1580 0.0354 0.3153 0.0931 0.9434 0.9918 0.0215 0.4712 0.0217 0.0164 \nerr = 0.0002 0.0007 0.0014 0.0003 -0.0002 0.0006 0.0013 0.0005 -0.0003 0.0002 -0.0001 0.0005 0.0014 0.0007 0.0002 0.0009 \n\nin  = 1.0000 0.5853 0.7725 0.7886 0.4880 0.2104 0.1580 0.0354 0.3153 0.0931 0.9434 0.9918 0.0215 0.4712 0.0217 0.0164 \nout = 0.9997 0.9469 0.8376 0.9347 0.9596 0.6568 0.3479 0.1244 0.6791 0.1482 0.9866 0.9987 0.1674 0.5473 0.1711 0.1162 \nerr = 0.0003 0.0002 -0.0009 0.0004 0.0004 0.0002 -0.0007 -0.0007 -0.0007 0.0005 0.0003 -0.0008 -0.0000 -0.0010 0.0007 0.0002 \n\nin  = 1.0000 0.9469 0.8376 0.9347 0.9596 0.6568 0.3479 0.1244 0.6791 0.1482 0.9866 0.9987 0.1674 0.5473 0.1711 0.1162 \nout = 0.0007 0.9564 0.9029 0.9494 0.9265 0.6058 0.1955 0.0385 0.6340 0.1822 0.9920 0.9998 0.0632 0.6778 0.0595 0.0488 \nerr = -0.0007 0.0005 -0.0017 0.0007 0.0003 0.0008 -0.0008 -0.0002 -0.0005 0.0003 -0.0005 0.0012 -0.0002 0.0001 -0.0004 -0.0001 \n\nlearn result:\n-9.1130 -4.6952 10.3669 1.1419 -6.2698 -10.0712 -2.9436 -4.3510 -1.4037 -10.0334 5.1694 4.8659 -6.3518 5.0172 -18.3967 -18.4768 6.8743\n2.5601 -0.8417 0.3259 -0.4397 -0.1577 
0.9781 -0.4246 0.5796 0.1861 0.7129 -0.1788 0.7377 0.0229 -0.5120 0.4339 0.8746 0.3554\n0.8910 0.8962 0.1511 -0.6213 -0.3017 0.7336 0.7339 -0.0504 -0.5286 -1.0849 0.0232 -0.7474 -0.5905 1.2326 1.2608 0.8690 0.8541\n1.6852 0.0285 -0.6908 0.9214 -0.2172 0.4253 -0.0445 -0.8371 -0.0912 1.0301 0.9175 0.4505 -0.2547 0.3568 0.5283 0.3138 -0.7397\n2.5996 -0.9552 -0.6625 0.6841 0.2753 -0.9938 -0.5806 0.8085 -0.6441 0.4657 1.0774 -0.1523 0.3393 0.3682 0.0272 0.8907 0.3349\n1.5433 0.3355 -0.2034 0.5916 -0.1975 -0.5763 0.9588 0.1515 -0.9263 1.0299 0.0268 -0.2792 -0.1076 -0.7625 0.7556 -0.1802 -0.5422\n0.3006 -0.4480 -0.6562 -0.4864 -0.1389 -0.4163 -0.7516 0.8076 -1.1816 -0.4384 -0.0586 0.8537 0.8005 0.9533 0.4370 0.0824 -0.3859\n0.5443 -0.4398 -0.4003 -0.8587 0.1024 -0.7692 -0.7797 0.3381 -0.9221 0.6265 -0.7598 0.3027 -1.0755 0.8635 -0.8534 0.0740 -0.7462\n1.2046 -0.6341 0.3949 0.2949 -0.0074 1.0891 0.3266 0.6937 -1.2099 -0.5910 -0.3737 -0.8537 -1.1373 -0.6702 -0.2335 1.0057 1.0412\n0.8039 0.3254 -1.1183 -0.9075 0.9645 -0.6656 0.8144 -0.9859 0.7908 -0.8362 0.3005 -1.1398 -0.8621 0.6471 0.1135 -0.7613 -1.1906\n1.9425 -0.0765 0.4977 0.6575 -0.2365 0.3545 0.3037 0.0675 0.3144 -0.6991 -0.1284 0.3520 -0.1917 0.8875 0.4458 1.2798 0.7816\n3.3990 0.5631 -1.0584 0.7851 0.9132 1.2111 1.0663 0.6461 0.9690 0.6089 1.0361 1.3116 0.5554 -0.0895 -0.8398 0.9198 -0.3761\n1.1812 -1.1888 0.2389 -0.7478 0.0856 -0.9482 0.4038 -0.1398 -0.5496 -0.2332 -1.1034 -0.0459 -0.5768 0.3246 -0.1756 0.9763 -0.4574\n0.6902 0.4523 0.4499 -0.6451 0.1693 0.1208 1.0342 0.0904 -0.0714 0.7592 -0.0598 -0.7098 -0.3367 0.2257 1.1379 -0.2981 -0.2852\n1.0571 -0.6162 -0.5068 -1.0256 -0.1029 -0.9256 0.1746 -1.0149 -0.3888 0.0819 -0.5433 -0.1432 -0.4409 -0.7199 0.1160 0.0509 0.2938\n1.0808 -0.9413 0.1215 -1.2361 -0.0094 0.1238 -0.0850 -0.3479 -0.3395 -1.0339 -1.3508 -0.2073 0.4107 -0.9436 -0.7933 -0.6040 0.4724\n<\/pre>\n\n\n\n<p>Every triple of <code>input<\/code>, <code>output<\/code> and <code>error<\/code> is a 16-vector 
where the first element is our input\/output, while the remaining 15 values are its <em>internal thinking<\/em> &#8211; the walnut bulge. Every output of a bulge node is fed back into the very same slot of the input, while the IO variables are passed in and out.<\/p>\n\n\n\n<p>Every training step adjusts the matrix weights according to the error vector for the desired output. In the same step, the error vector for the input is constructed so that errors can be passed down the network. Every error induces a slight change of the matrix weights in the <em>right<\/em> direction (we only ensure that the sign of the correction is right and that its magnitude does not cause overshooting).<\/p>\n\n\n\n<p>The benchmarked performance on an AMD Ryzen 9 3900X (Zen 2) is around 166,000 16&#215;16-matrix learning propagations per second on a single CPU core.<\/p>\n\n\n\n<p>Our next steps are to create a network of these matrices and start feeding them data. The use case for Walnut AI is sequential data, as in language models or request-response patterns for classification.<\/p>","protected":false},"excerpt":{"rendered":"<p>Our brain is shaped like a walnut. 
And that&#8217;s for a reason.<\/p>","protected":false},"author":2,"featured_media":4878,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_editorskit_title_hidden":false,"_editorskit_reading_time":7,"_editorskit_is_block_options_detached":false,"_editorskit_block_options_position":"{}","_uag_custom_page_level_css":"","footnotes":""},"categories":[128],"tags":[],"class_list":["post-4877","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-programming","single-item"],"featured_image_urls_v2":{"full":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",2560,1235,false],"thumbnail":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-150x150.jpg",150,150,true],"medium":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-300x145.jpg",300,145,true],"medium_large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-768x370.jpg",751,362,true],"large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-1024x494.jpg",751,362,true],"1536x1536":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-1536x741.jpg",1536,741,true],"2048x2048":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-2048x988.jpg",2048,988,true],"trp-custom-language-flag":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",18,9,false],"xs-thumb":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-64x64.jpg",64,64,true],"appku-shop-single":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",620,299,false]},"post_excerpt_stackable_v2":"<p>Our brain is shaped like a walnut. And that&#8217;s for a reason. 
The whiskings and bulges of our brain are a clever way of nature to provide a very interesting structure to process data. At the intersection of two bulges, data can be interchanged while in the bulge itself, the &#8220;thinking&#8221; is kind of separated. Let me introduce Walnut AI: an AI pattern that works on a massive scale of fixed-size small matrices. Why fixed-size matrices? Because compilers can loop-unroll all algorithms on the matrix as long as the size is small and fixed. This makes the network make use&hellip;<\/p>\n","category_list_v2":"<a href=\"https:\/\/launix.de\/launix\/en\/category\/programming\/\" rel=\"category tag\">Programming<\/a>","author_info_v2":{"name":"Carl-Philip H\u00e4nsch","url":"https:\/\/launix.de\/launix\/en\/author\/carli\/"},"comments_num_v2":"0 comments","uagb_featured_image_src":{"full":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",2560,1235,false],"thumbnail":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-150x150.jpg",150,150,true],"medium":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-300x145.jpg",300,145,true],"medium_large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-768x370.jpg",751,362,true],"large":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-1024x494.jpg",751,362,true],"1536x1536":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-1536x741.jpg",1536,741,true],"2048x2048":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-2048x988.jpg",2048,988,true],"trp-custom-language-flag":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",18,9,false],"xs-thumb":["https:\/\/launix.de\/launix\/wp-content\/uploads\/2023\/02\/hd-wallpaper-6805922-64x64.jpg",64,64,true],"appku-shop-single":["https:\/\/launix.de\/launix\/wp-content\/upl
oads\/2023\/02\/hd-wallpaper-6805922-scaled.jpg",620,299,false]},"uagb_author_info":{"display_name":"Carl-Philip H\u00e4nsch","author_link":"https:\/\/launix.de\/launix\/en\/author\/carli\/"},"uagb_comment_info":0,"uagb_excerpt":"Our brain is shaped like a walnut. And that&#8217;s for a reason.","_links":{"self":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/4877","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/comments?post=4877"}],"version-history":[{"count":3,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/4877\/revisions"}],"predecessor-version":[{"id":4886,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/posts\/4877\/revisions\/4886"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/media\/4878"}],"wp:attachment":[{"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/media?parent=4877"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/categories?post=4877"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/launix.de\/launix\/en\/wp-json\/wp\/v2\/tags?post=4877"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}