Architects of Consensus is a series dedicated to shining a light on the unseen figures who are developing and advancing the most battle-tested, highly performant blockchain stack in the world — EOSIO. These are the intrepid explorers whose lives have traced the elliptic curves and merkle trees of blockchain technology to reveal its many potentials, and who have returned to share that knowledge with us.



“Any sufficiently advanced technology is indistinguishable from magic.”

— Arthur C. Clarke.

Navigating the boundary between machine and human language is a kind of magic. As seen through the eyes of a blockchain developer who dreams in compiler theory, time seems to dilate, and previously unseen modes of communication become visible. In my conversations with Bucky Kittinger, I came to realize that for him, the bleeding edge is more like a familiar stroll through a garden of forking paths, and he seems quite at home there.

Our video call springs to life, and Bucky is seated in front of a wall of beautiful guitars. His manner of speaking gives him a distinctly Southern US vibe, and after we talk a while it becomes apparent that he’s equally at home in the farmlands of Virginia or on the frontiers of blockchain technology, which is to say, he seems just as likely to tangle with a bull as ride a light cycle into some Tron-esque digital landscape.

From his early inspiration by way of The Legend of Zelda, to his thoughts on the EOS community, potential support for ARM architecture, and how EOS can contend in the “final leg of the arms race of app execution” with NatiVM, this conversation with Bucky runs a wide gamut.

Bucky is a consensus-level blockchain architect and engineer who has been exploring the edges between bare metal and highly performant blockchain solutions since 2016. After completing his B.S. in Computer Science at Radford University, Bucky began pursuing his Ph.D. in Compiler Theory and Computer Architecture at Virginia Tech. He later went on to join Block.one to architect and build the next generation of blockchain solutions. Bucky is now Principal Engineer at the EOS Network Foundation, where he is leading the rebirth and reboot phase of the underlying technology and aiming to push EOS even further as a best-in-class Web3 smart contract platform and blockchain ecosystem.

Fundamentally, what are the problems that blockchain presents?

So for me personally, one of the main problems that persist in blockchain is the general practicality of the current solutions, the latency problems, the throughput problems, user experience, and beyond. This means that most people cannot realize a lot of the ideas and new solutions because of the extreme limitations of the amount or cost of CPU you have, the price of RAM, and those types of concerns.

And that’s the more significant area of things that I would like to see drilled to the floor: performance, latency, and compiler issues. Again, minimizing the cost of these things so that you can have these elaborate and sophisticated smart contracts exist, and they’re more than just, say, transfer a token from account a to account b, transfer from b to account c.

There’s a lot more to be had at the blockchain level that just doesn’t exist right now because it’s too expensive, or you can’t do it in time. So the programmatic models that exist are just horrible; you have to do mental gymnastics to figure out, for instance: if I cut the state here and save this, and then come back in and make a reentrant type of action, how will this work?

And from what we’ve seen, not very well, right? You’ll probably end up making bugs and losing millions of dollars. I’d like to see that layer be a lot easier to contend with, where it’s just a lot easier to build out these more complex and complicated systems and less about doing the bare minimum.

What are some of the more notable projects that you’ve worked on?

So at Block.one, I created EOSIO.CDT, EOSVM, and generally worked on a lot of the foundational stuff on EOS. I also contributed to EOSIO contracts quite a bit, and pretty much dabbled here, there, and everywhere as most of the core engineers did. Beyond that, probably my Ph.D. was the only other thing of note. Writing papers on compiler theory, computer architecture, benchmark synthesis, and that kind of thing.

“latency and throughput have historically been a big problem regarding uptake and adoption. So for me, DPOS allowed for EOS and other chains built off of it to be the most performant and scalable types of solutions because we can optimize for hardware more than any other chain can.”

What initially led you to DPOS and EOSIO, and what kept you around?

So the thing that initially led me to DPOS was its utilitarian promises. The biggest thing I’ve always wanted to see was the ubiquity of blockchain and where we can take it as a technical product for real everyday people to reap the benefits of this beautiful technology. 

As mentioned above, latency and throughput have historically been a big problem regarding uptake and adoption. So for me, DPOS allowed for EOS and other chains built off of it to be the most performant and scalable types of solutions because we can optimize for hardware more than any other chain can. We can make good decisions and optimizations towards what things we want to support and what types of things we don’t. 

Whereas Ethereum and many other chains can’t, because their model is not one where a few people produce blocks; their model is that they have many people that validate blocks. So they end up designing the system to be able to run on a potato. That is, it can run on your smartphone or on a high-performance computing cluster sitting in a national lab. So they can’t really make a lot of assumptions.

Did you say—potato?

(laughs) Yes. It is an expression for very cheap hardware. Think of a potato with wires coming out of it.

On the other hand, EOS can say they will have relatively good hardware. So, therefore, if we optimize for these things, we know that we can get CPU costs down to a minimum, optimize for particular types of hardware extensions, lower RAM costs, or design around I/O operations. 

Almost all these other chains end up in this race to the bottom anyway. I.e., the hardware that is used is expensive, custom, or well out of the reach of everyday people. So, by leaning into good hardware we are optimizing for reality instead of a theoretical pipe dream.

So why EOS? What’s the core value proposition of EOS or what do you hope to make it?

So why EOS? Well, it started with a good community and an incredible amount of potential, and I still think that community is there right? It’s just buried and somewhat disillusioned right now, you know, and I think if we can get back in some degree to fulfilling some of the deep promises that were heralded at the beginning, I think that we could have a fantastic base to build off of.

To some degree it’s kind of like building a company and you already have people that were at least interested in it at some point. There’s a benefit that we don’t have to go and bootstrap an entire new ecosystem.

Now, I think that there’s a lot that needs to be done in terms of building back that trust. That’s exactly what’s being done right now. As long as we continue doing things that people want and hopefully get the chain moving in a good direction where people are making money, then the rest will follow.

Plus for me personally, I have unfinished business around a lot of the tech and that technical direction that I still want to see happen that has been in my mind for literal years at this point.

“I still see the current landscape of blockchain and crypto as incredibly nascent and fringe, but once we can close that gap for ubiquity, we will be able to see the actual benefits of the technology and how far it can go.”

You mentioned that maybe EOS could fulfill some of the initial promises. How would you characterize what you felt those promises were?

What I felt was: one of them was the drive for ubiquity. When saying ubiquity, I mean blockchain for all, blockchain for everyone from the everyday general internet user to the most crypto-fanatic user. I still see the current landscape of blockchain and crypto as incredibly nascent and fringe, but once we can close that gap for ubiquity, we will be able to see the actual benefits of the technology and how far it can go.

At Block.one I think that we kind of went in too many different directions. I would still say that EOS is poised to be one of the more stable chains, and I think it is one of the most brimming with potential. But I would say concretely: scalability, a truly top-tier professional environment to create on, and the safest and most profitable chain around are all areas that I want to see remediated.

What’s the crypto scene or developer community like where you live?

I live in Christiansburg, which is right near Blacksburg, Virginia. The crypto scene is reasonably diverse. You have a mix of diehard ideological crypto people, the usual suspects of tinfoil hat people, the purely technical people and a general smattering of all the above.

The main thing that kind of drives that is Virginia Tech. It’s rural, you know, it’s farming and stuff. So there’s not a whole lot to do other than farm or get into creating blockchains.

Because Virginia Tech is a reasonably large university it garners a relatively diverse group of people. I think that these are still very nascent days in the Blacksburg, Christiansburg area in terms of a big crypto community, but the people that are there are diehards. You have some that came into the Block.one scene that were already in the area, and you have people that came to Block.one from out of state and just stuck around. And so now they’re driving their own initiatives and stuff, which is cool.

What’s your story? What was your journey to blockchain like?

So, I grew up in Christiansburg, Virginia on a small cattle farm. I got into computer science because I played Legend of Zelda on a friend’s NES and I absolutely loved that game. And I thought, I need to figure out a way to make this one day for myself, you know, like literally make the Legend of Zelda game!

I grew up pretty poor. So a lot of those initial days of learning to program were at school computers during my lunch break or after school by myself. Also, I couldn’t afford any kind of books, so the early years were hard-won battles.

This was around late elementary or early middle school, so that was probably the early to mid ’90s. I went to school and they had a big mainframe that connected to Virginia Tech at the time. I was exploring that and playing the crappy games where the letters and numbers fall, and that kind of stuff. It had the big dot matrix printer that went gzhhhh gzhhhh gzhhhh, really ancient technology for the time.

But after getting bored on that machine and searching around the filesystem every day, I found a weird BASIC interpreter and started figuring out how to program in BASIC, and that was the beginning of the end for me. (laughs)

In middle school they upgraded the library’s computer lab, so I started using that and found QBasic packed away deep in the system32 folder somewhere, and I started writing QBasic programs which was fun. You know back then the internet was still very wild west in terms of trying to find anything. So I had a teacher who gave me a book, that I still have, on how to program QBasic.

I made all different kinds of random stuff. The problem though, is that it was horrendously slow, so me trying to make Legend of Zelda wasn’t gonna happen.

So that’s why you got into compiler theory, wasn’t it? Because you were like, this is way too slow!

Yeah, it was way too slow, and that really probably was it, because I was constantly driving to have more performance and more performance. I kind of went off the deep end and jumped from QBasic into assembler, because that was the only other thing that was on the computer (Microsoft assembler); it didn’t have a C compiler or a C++ compiler, so I was like, well, obviously the thing to go to from BASIC is assembly. So then I started writing assembly, and shortly after that I found Linux, found out that it had C and C++ compilers, and started learning C, and kind of went down that route.

How did your family regard all of this?

So they didn’t really see it as a great thing. It was just a nerdy thing that I was doing, you know, that Bucky did. “He’ll grow up and do real work one day”, “he’ll do that for a while”, “don’t worry it’s just a phase, he’ll grow out of it.” I didn’t ever grow out of it.

So I ended up working for my dad when I got out of high school, and I did that for four years and saved up really almost every penny that I had and then put myself through undergraduate and then into post-grad immediately, which is kind of a weird path to take.

Where did you do your undergrad?

I did my undergrad at Radford University. It’s like 5 minutes from Christiansburg.

How did you end up going directly from an undergrad in computer science to doing your doctoral work?

When I was doing my undergrad I ended up befriending the computer architecture professor and the compiler professor (Dr. Ian Barland and Dr. Ned Okie). They were like, you need to continue on, have you thought about doing your PhD work? And I said, no, not really. You know, like: I’m a farm kid from Virginia, this is already enough, this is crazy compared to anybody else in my family, so no. And they were like, yeah, you should do your PhD. So that was really the spur, right, that was the only thing it took.

So I went and found the professor currently over in the architecture and compiler side at Virginia Tech and had one interview with him and he said, “Ok, can you show up on Monday?” And I said, “ok, cool, that’ll work!”

So for what, three years, you just ate and slept compiler theory?

Yeah. And computer architecture, because another part of the work that I did was designing processor extensions, designing code transformations and optimizations for different kinds of wacky architectures and stuff like that.

All right. Since you brought up compiler theory, give us compiler theory in a nutshell. 

(Laughter)

Ok, but seriously, what is the tangible outcome of doing a PhD in compiler theory?

So, the one thing that I focused on was how to do computation in hyper-embedded IoT frameworks for energy harvesting systems, or very power-fault-heavy types of systems, and guarantee complete execution at some point. Which is another reason that, whenever I went and talked with Dan (Larimer) at the time, I saw some degree of parallels between hyper-embedded systems and the smart contract execution layer. And I’ve had some of the same visions and wants for a lot of this stuff since that initial meeting.

Could you explain what a power fault is, and why that concept might be relevant to blockchain?

If you have an energy harvesting system or any kind of system where the power will cut out randomly, when you resume you will start back at the beginning and if you don’t have enough energy there to continue forward, that will effectively kill the computation that’s occurring. 

So the issue with those types of things is that a lot of these small sensors or nano IoT devices are going to be running on these little chunks of program flow, and you don’t want them to just cut off, because you’ll wind up in this state where it’s like Sisyphus, right? Where it’s just constantly trying to run the boulder up the hill. And then it just dies out and it can never get the boulder up the hill, so you never really finish this part of the thing that you’re trying to do.

It’s called stagnation. It’s a big problem. So I solved that in like three or four different ways, some with hardware, some with completely compiler directed stuff. 

So for me, the goal there was to hopefully bring some of that over into EOSIO, and we never really got there, because there was too much to do and not enough time to do it. Also, there were a lot of domain-specific optimizations that I had designed at Virginia Tech that I always wanted to bring over, which never fully happened either.

Because to me, the tangible thing there that really could be done is effectively enabling longer-running processes to exist on the blockchain, so you can start to use more normal types of development patterns on the blockchain. Less of a niche and hard thing to do where everybody has to constantly think in these really abstract models of, well, I only have, you know, .5 seconds to do this computation. How do I do that? How do I mentally go in and chunk up this program to effectively run in this very small window?

So for me, one of the things I also wanted to get to was allowing for these longer running computations to exist just completely naturally with the program, right? There’s nothing the engineer has to do there.

So, if we look at stagnation and the small buffers of energy that are available to run programs, and at smart contract execution, the parallel is that they both will abruptly cut off “randomly” during control flow. For smart contracts this has been worked around, to varying degrees of effectiveness, with explicit stateful patterns: saving state in a table or passing it back with an action return value, and trying to create reentrant actions or logically long-running actions.
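To make that pattern concrete, here is a minimal sketch of the kind of explicit stateful workaround Bucky describes, written against the standard EOSIO CDT C++ APIs. The contract, table, and action names (chunked, progress, process) are hypothetical and the real work is elided; the point is only that each transaction does a bounded slice of work and persists a cursor so a later action can resume.

```cpp
#include <eosio/eosio.hpp>
using namespace eosio;

// Hypothetical illustration of chunked work: each call to process() performs a
// bounded number of steps and saves a cursor, so the logically long-running
// computation is re-entered across many transactions.
class [[eosio::contract("chunked")]] chunked : public contract {
public:
   using contract::contract;

   [[eosio::action]]
   void process(name owner, uint64_t max_steps) {
      require_auth(owner);
      progress_table table(get_self(), owner.value);
      auto itr = table.find(owner.value);
      uint64_t cursor = (itr == table.end()) ? 0 : itr->cursor;

      // Do only a bounded slice of the overall work in this transaction.
      for (uint64_t i = 0; i < max_steps && cursor < total_work; ++i, ++cursor) {
         // ... one unit of the real computation would go here ...
      }

      // Persist the cursor so the next call resumes where this one stopped.
      if (itr == table.end()) {
         table.emplace(owner, [&](auto& row) { row.owner = owner; row.cursor = cursor; });
      } else {
         table.modify(itr, owner, [&](auto& row) { row.cursor = cursor; });
      }
   }

private:
   static constexpr uint64_t total_work = 1'000'000;

   struct [[eosio::table]] progress {
      name     owner;
      uint64_t cursor;
      uint64_t primary_key() const { return owner.value; }
   };
   using progress_table = multi_index<"progress"_n, progress>;
};
```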

So, why work on compilers?

I just love low level systems, and compilers. So it’s easier if you’re like a really low level guy, you either get into operating systems or compilers and I chose compilers because to me they’re like a magical thing. You take this human readable thing and turn it into something that’s incredibly optimized and just cool. 

Tell us a bit about your work on the EOSIO CDT (contract development toolkit) and why you think it needs an overhaul.

Yeah, so EOSIO.CDT, which now in Mandel is just CDT, is the contract development toolkit. So it’s the set of toolchains: the compiler, the linker, all of the things that create libraries for EOSIO WebAssembly (Wasm), optimizations, language extensions for generating code specific to EOSIO, abstractions over certain blockchain constructs, and then library support for debugging and testing, plus the actual fundamental libraries for working on the EOSIO blockchain.
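For readers who haven’t seen it, the canonical minimal contract gives a feel for what the CDT actually does: its attributes drive the code generation for the action dispatcher and the ABI, and its compiler driver (eosio-cpp in EOSIO.CDT, renamed cdt-cpp in the later CDT) turns the C++ below into a Wasm module, e.g. with eosio-cpp hello.cpp -o hello.wasm.

```cpp
#include <eosio/eosio.hpp>

// The canonical minimal contract: the eosio::contract / eosio::action
// attributes drive the CDT's code generation (action dispatcher, ABI).
class [[eosio::contract("hello")]] hello : public eosio::contract {
public:
   using contract::contract;

   [[eosio::action]]
   void hi(eosio::name user) {
      eosio::require_auth(user);
      eosio::print("Hello, ", user);
   }
};
```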

So when I created the CDT, I set out a few goals for it. Those goals were to be incredibly performant, safe, secure, and easy to use, and I really wanted to get there with extensibility in terms of community involvement.

There are a few big issues that were introduced with the CDT. First, I didn’t want it to be just C++. At the time, that was a big, hard stamp in the ground saying “C++ is the only thing that is ever going to exist, ever.” So I had to live with that, but I was kind of hoping we could migrate the project at some point to be a bit more abstract. The other issues were around utility and ease of use. And once again, we just never really got there. The focus at the time was not on tooling.

The majority of CDT work was done by me in my own free time for three years, so it was really hard to get any degree of momentum on doing anything in that space. I was always trying to play catch-up as opposed to getting some of these more foundational things off the ground.

“…I think the people just in general realize that if you have an ecosystem—it’s the ecosystem—right? It’s not just one small component that you supply and then that’s it. You have to start to build up that ecosystem, and if you don’t do that, then things just effectively evaporate.”

So you must be pretty excited to be part of the ENF and able to take a second crack at this with a team that wants you to really spend energy on it. What is that going to look like for you?

So yes, definitely. That was one of the main things that I originally talked with Yves (La Rose) about when I came over to the ENF: this concept of rebirth and reboot. Fixing things that we did wrong or just didn’t do at all the first go-around, and kind of having a parallel universe that takes off in a good direction, where four years from this point we’ll be in a completely different spot as opposed to where we are now. So I was like, yes, this is definitely the place to go. I was extremely excited about the prospect of actually having a good degree of people listening and people interested in the same kinds of things that I was focused on before.

It’s a little more all encompassing, and I think the people just in general realize that if you have an ecosystem—it’s the ecosystem—right? It’s not just one small component that you supply and then that’s it. You have to start to build up that ecosystem, and if you don’t do that, then things just effectively evaporate. And that’s kind of where we’ve gotten to.

There was an analogy that I came up with during my time at Block.one that I still love, which is: if you go and build the fastest car in the world, but nobody drives it, then you’ve effectively built the slowest car in the world. It is effectively driving zero miles an hour. So to me, that was the problem. We were focusing on a lot of these other ancillary things that had limited appeal or utility.

So, tell us about ANother TransLator Environment and Runtime, or ANTLER, your proposed successor to the CDT.

One of the big differences between ANTLER and CDT is that it’s not C++ only, all the time. It will still have C++, but it will also have first-class citizenship for C, Go, and possibly Rust out of the box. Rust might come as a fast follow. I’ve been playing around with a lot of these things behind the scenes for a little while.

So the thing with ANTLER is I wanted to sit down and design an ecosystem of tooling and a way of doing this where you can have a degree of, I don’t want to say centrality, but a set of things effectively built out for people to buy into.

Just because someone goes and creates a compiler, to me that’s just not really a good enough thing. You have to make a really good optimizing compiler. And that is a lot more effort than I think any team wants to go and pick up themselves.

And we need that, but then what happens after, right? They need debugger support, they need profiler support, they need all these other quality-of-life features that have to exist, which means that they would have to go and build them themselves, which is once again a complete nightmare.

So one of the big goals of ANTLER is to abstract the concepts of code generation, linkage, optimization, debugger and profiler support, etc., so that these smaller teams can create a new language easily without any of the headache of the rest.

It would also have a packaging system and much simpler build system to abstract over these languages and allow for easy construction of more complex systems and hopefully get more code available to developers via community owned packages.

One of the larger issues today is that smart contract devs have to become very well acquainted with C++, which is not easy. The language as a whole has a lot of traps, and you can very easily shoot yourself in the foot. So, by extending the set of languages supported, you run a better chance of a developer being more knowledgeable about one of these languages and understanding the pitfalls and traps associated therein.
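As one generic illustration of the sort of trap he means (not an EOSIO-specific bug), unsigned arithmetic in C++ silently wraps around, so a naively written balance check can pass exactly when it should fail:

```cpp
#include <cassert>
#include <cstdint>

// A classic C++ trap: unsigned arithmetic silently wraps around, so a naive
// balance check can pass even when it should fail.
bool can_withdraw_naive(uint64_t balance, uint64_t amount, uint64_t fee) {
   // If amount + fee overflows, the sum wraps to a small number and the
   // check passes; a contract written this way could be drained.
   return amount + fee <= balance;
}

bool can_withdraw_safe(uint64_t balance, uint64_t amount, uint64_t fee) {
   // Rearranged so no intermediate sum can wrap.
   return amount <= balance && fee <= balance - amount;
}

int main() {
   uint64_t huge = UINT64_MAX;
   assert(can_withdraw_naive(100, huge, 101));   // wraps around and passes: the bug
   assert(!can_withdraw_safe(100, huge, 101));   // correctly rejected
}
```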

The hope is they are already experts in some language. So if you can try to capture them in whatever they’re an expert in and leverage that, then we have a much better chance of creating a very diverse smart contract ecosystem. As opposed to them coming with their knowledge of language X and then having to become an expert in Wasm, the DWARF formats for debugging, static analysis, and linkage.

One of the bigger issues faced with not being an expert in the fundamental language is introducing bugs and losing people millions of dollars. So I think at the end of the day, the goal here is to allow people to build these systems and buy into the debuggers, profilers, and highly optimizing compilers, and abstract away the bits that they don’t care about. The only thing they care about is the fundamental language side of it.

So once you have that, then things hopefully start to become fairly easy from that point, where onboarding people is a matter of, ok, what language are you good at? Ok, you’re good at OCaml, well the community wrote an OCaml compiler. Put this language spec as OCaml and it will go and pull that and then your compiler will just work. That’s the high level (and lofty) goal of ANTLER.

So, speaking of architecture, you architected EOS-VM, which uses Wasm (WebAssembly). I hear that you are looking to create a new system there as well?

Yeah, so the thing that I want to do is what I’m calling the NatiVM runtime, which effectively uses x86-64 (AMD64 is the other name that’s used for it) as an intermediate representation, like how we use Wasm now. It’s the core architecture, or instruction set, that is used in Intel and AMD devices currently.

There are several reasons for moving towards x86-64. One is to buy into 40 years of development. From a developer standpoint, we don’t need to go and reinvent the wheel on all these different types of tooling, static analysis, dynamic analysis, profiling tools. We can just buy into industry-standard technologies. It’s already been written. We don’t need to go and do any of that. We can just inherit it for free. So that’s one big component.

The other component is foundational performance problems with Wasm, and some issues that I perceive with Wasm in terms of movement and momentum of the standard going forward.  

Another issue with Wasm is the upper bound to the performance that we can get out of it. That comes partly from the compilers and toolchains that exist, and partly from the overall limitations of what we can do with that kind of abstracted architecture, the bare minimum of what we can produce out of it.

So x86-64 fits the model of a register-based, three-address system very well, meaning that the instruction set has (maximally) a result, operand 1, and operand 2, plus multiple addressing modes for utilizing registers and memory locations. Wasm fits the model of a 0-address, stack-based system, so there’s a loss of detail between those. And then, because of that stack-based system, register allocation is difficult; converting that over is a very complex analysis, and one where you still can’t ever push performance past a certain level because of something called alias analysis issues (lack of type information).
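A rough, illustrative example of the difference: the same expression written in C++, with comments sketching how a 0-address stack machine like Wasm and a register machine like x86-64 would each lower it. The instruction sequences are simplified and assume the System V calling convention.

```cpp
#include <cstdint>

// The same expression on the two machine models (simplified, illustrative).
int64_t f(int64_t a, int64_t b, int64_t c) {
   return a + b * c;
   // Wasm (0-address stack machine): operands flow through the value stack.
   //   local.get $a
   //   local.get $b
   //   local.get $c
   //   i64.mul        ;; pops b and c, pushes b*c
   //   i64.add        ;; pops a and b*c, pushes the result
   //
   // x86-64 (register machine, System V convention: a=rdi, b=rsi, c=rdx):
   //   mov  rax, rsi   ; rax = b
   //   imul rax, rdx   ; rax = b * c
   //   add  rax, rdi   ; rax = a + b*c
   //   ret
   // The register form keeps intermediates in one-cycle registers; a JIT has
   // to reconstruct that structure from the stack form before it can match it.
}
```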

“The thing is, the speed at which blockchain tech is moving is too fast for the speed at which Wasm is moving forward.”

So Wasm doesn’t use registers?

Yeah. So with Wasm, everything in it is effectively a memory operation. It either operates on an immediate that is encoded with the instruction or it is a memory location, which itself is either a memory address or a synthetic stack or global index. So in terms of x86-64, we have these things called registers, which are one-cycle operations to go and add things to them, read from them, write to them. That’s where a high amount of performance comes from with a modern architecture (also where it can be lost with having to spill those registers to fill with new data and not optimally allocating for them).

So, utilizing the registers of the system is always a big thing to optimize for. The problem is that Wasm has no concept of that. So the thing that ends up occurring is that those things end up becoming just memory reads and writes, which without going into a lot of technical detail with respect to compilers and why they don’t like that, ends up causing some issues for optimizing those things away.

Yet another issue is the standard itself is moving slower than mud, in terms of getting any of these extensions, so that’s been another constant problem within Wasm or our uptake of Wasm. These extensions would be things like SIMD, exceptions, dynamic modules, etc.

There was always this kind of mentality that we gained a lot by staying somewhat close to vanilla Wasm. The problem there is that we would always have to wait on the spec to be updated and ratified and move forward, and even then there’s stuff that’s not even in the spec that we really want there to be for performance reasons, for developer reasons, for node operator reasons.

There’s a list of things that we should do, but we can’t because those things are just not part of Wasm, and if we have to continually wait on these things, then the blockchain itself is going to end up dying before we ever get to implementing them. The thing is, the speed at which blockchain tech is moving is too fast for the speed at which Wasm is moving forward.

So your proposed solution with NatiVM would be a new standard?

Yeah.

Could you give a brief intro to NatiVM, and I’m also curious how you see it being used by the blockchain industry at large.

One of the biggest things is that it will be an open standard that we supply, and then hopefully in the future other people have input and influence into the direction of that. 

The bigger components are that it will be a proper subset of x86-64. In the future, we would (hopefully) also have deterministic ARM support, but the thing to focus on right now is just the deterministic x86-64. Anybody building tooling or library support won’t have to go and work out how these instructions are encoded or decoded; that kind of thing is already there. They can just buy into the pre-existing stuff that exists for x86-64, for the runtimes and validation systems. The foundational compiler and runtime are the only ones that have to focus on what this subset is. The binary format is also in scope for the standard: what sections the app itself has, and the different sections that we want and want to maintain.

So that’s one component, and then the second is more the VM (virtual machine) layer. What does the memory management look like? How does the memory management system work with the operating system in a very efficient way? How can we guarantee that?

I’ve got things in there for safety and security for the app developers. For example, things like automated, really low-cost stack canaries, so if they do miswrite something, it should fail, so they don’t get attacked online or have somebody swindle them out of a bunch of money.
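The canary idea itself is straightforward; here is a hand-rolled sketch in plain C++ of what such a check amounts to (NatiVM would place and verify the guard automatically, and its actual mechanism may differ):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hand-rolled sketch of a stack canary: a guard value sits next to a fixed-size
// buffer and is verified before the function returns. Real schemes have the
// compiler or runtime insert and check the guard automatically.
void handle_input(const char* input) {
   volatile uint64_t canary = 0xdeadbeefcafef00dULL;  // guard value
   char buffer[32];

   // Copy with an explicit bound; an unbounded copy here is exactly the kind
   // of miswrite a canary is meant to turn into an immediate, loud failure.
   std::strncpy(buffer, input, sizeof(buffer) - 1);
   buffer[sizeof(buffer) - 1] = '\0';

   if (canary != 0xdeadbeefcafef00dULL) {  // verify the guard on the way out
      std::abort();                        // corrupted frame: stop immediately
   }
}
```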

“…we can guarantee determinism at the root level of execution.”

How far along are you with NatiVM in terms of, do you see it all? 

Yeah, currently I don’t have any blind spots; everything is there. I have a lot mapped out as to what the bits and pieces and the abstracted thing look like, but I’m still pinning down exactly which instructions are going to be included in V1.

I know what I want there. But I would also like to include some initial SIMD, so I am weighing it out. The other issue with Wasm is that for things like SIMD instructions, which are heavily used in cryptographic functions, the maximum bit width we can achieve with Wasm is 128 bits. There are “standard” Intel and AMD extensions like SSE, AVX, and AVX-512, and these have 128, 256, and 512 bit operations, in addition to other specific extensions that blockchain can leverage.

The goal would be: we should be able to buy into a lot of the very, very optimized cryptographic functions that people have spent an exorbitant amount of time hand-writing against these very esoteric SSE instructions for x86-64. There’s a lot of value in doing that for performance and for soundness of implementations. It’s already been written and tested.

The thing I’ve seen is if a language does support those kinds of things, they support them in an abstract manner, meaning that the language has to expose those as something special and then there’s a lot of work there to kind of work around those things. And then it’s sort of an upheaval of—the next version has to support this—you know, there’s a lot of back and forth there.

One of the bigger issues with Wasm that I haven’t touched on is determinism. We can’t truly guarantee determinism with most of the “backends” that we have, not truly. This is because we are not validating the generated code that is run on the physical machine. But a wonderful consequence of NatiVM is that we validate the x86-64 as purely deterministic x86-64. This means that we can guarantee determinism at the root level of execution.

“The thing about NatiVM is that I see it as the final leg of the arms race of app execution—we still have other areas to improve around DB and those things—but we can ensure that we are driving the costs of execution to the bare minimum.”

So supporting SSE and other extensions seems like kind of a big deal considering this is blockchain.

Yeah, so these are instructions that typically, in computer science, you would call vector instructions. They allow for very large data sets and effectively you do one operation over multiple data chunks within it.

So if you’ve got four 64 bit values packed into that 256 bit register, it might do one operation, but operate over these four things at once with the same operation. It’s also the same way a GPU works. A GPU will effectively do one operation over a very large register and the thing that it’s operating over is a big matrix or vector.

So those instructions are incredibly useful in cryptographic systems because you do a lot of vector operations. So with having those, you can have like 4, 5, or 10x performance increases. So there’s a lot of utility to having those.
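For a concrete sense of what that looks like on x86-64, the AVX2 intrinsics below add four packed 64-bit lanes in a single instruction. This is an illustrative snippet, not NatiVM code; it assumes a compiler with AVX2 enabled (e.g. -mavx2).

```cpp
#include <immintrin.h>
#include <cstdint>

// One AVX2 instruction adds four packed 64-bit lanes at once: a single
// operation applied across multiple data elements, as described above.
void add4(const uint64_t* a, const uint64_t* b, uint64_t* out) {
   __m256i va = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(a));
   __m256i vb = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(b));
   __m256i vr = _mm256_add_epi64(va, vb);   // four 64-bit additions in one instruction
   _mm256_storeu_si256(reinterpret_cast<__m256i*>(out), vr);
}
```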

Secondly, you have Intel, as we move forward, adding more and more crypto instructions to their instruction set that are never going to be added to something like Wasm.

So, let’s say the base hardware itself doesn’t actually support an instruction, that’s perfectly fine. We will effectively inject a version of the “instruction” that will take over that operation and emulate the semantics, so if the hardware doesn’t support 512 bit operations—that’s not a problem. It can support 256 bit operations. We just split those in half and we do the same things that you normally would.
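Because these operations are lane-wise, that emulation is mechanical. Here is a sketch of the idea, again using AVX2 intrinsics to stand in for the missing 512-bit hardware:

```cpp
#include <immintrin.h>
#include <cstdint>

// Emulating a lane-wise 512-bit add on hardware that only has 256-bit AVX2:
// run the same 256-bit operation over each half of the data.
void add8_emulated(const uint64_t* a, const uint64_t* b, uint64_t* out) {
   for (int half = 0; half < 2; ++half) {
      const __m256i va = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(a + 4 * half));
      const __m256i vb = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(b + 4 * half));
      _mm256_storeu_si256(reinterpret_cast<__m256i*>(out + 4 * half),
                          _mm256_add_epi64(va, vb));
   }
}
```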

Another thing to note is just the fact that compiling to regular x86-64 instructions is a huge thing in itself. I don’t think it would take a lot to get someone to see that we should be able to compile apps and run them with an overhead comparable to that of pure natively built code running outside this NatiVM-managed system.

Maybe you could talk a little bit more about ARM architecture and how x86-64 can be translated into binary.

Yeah, sure. So obviously ARM is a big gleaming thing that’s going to happen at some point. I think it’s still a few years off, but I think that people wouldn’t want the prospect to be that ok, well we can only run on x86-64 architecture, right? It’s pretty limiting.

So during the validation phase of the deterministic x86-64 app we will first validate, and then use a constant-time “shotgun” binary rewriter to transform the x86-64 to ARM64. This will be a reasonably stupid rewriter, as we only support a subset and can make far more assumptions than a general-purpose system can. Alexis Engelke et al. showed a good affinity for this kind of conversion from x86-64 to ARM64, with minimal performance losses.

That seems like kind of a big deal, does that future-proof EOS to some extent?

It definitely should. The thing about NatiVM is that I see it as the final leg of the arms race of app execution—we still have other areas to improve around DB and those things—but we can ensure that we are driving the costs of execution to the bare minimum, we can own the standard ourselves so that we can adapt and change as time moves on, plus ensuring we have compatibility layers with other architectures should help to future-proof EOS.

What advice would you give to those wanting to build on blockchain?

The advice I would give would be a bit more high level than most would probably expect. Focus on building the application correctly first, i.e., make sure that the overall logic of the application is solid and well tested. After you know that the smart contract and DApp code are correct, you can focus on optimizing for CPU, RAM, or NET concerns.

But, first and foremost is just start the project. I know many people who say to me, “I have this great idea, but…” these people are good engineers, but the peculiarities of blockchain development stop them from even starting the project. So, ask plenty of questions and just try things.


It was a real pleasure talking with you, Bucky! Here’s to a future proof EOS!


The pleasure was mine; being able to talk about some of these things has been fun.  I think that EOS will have a very proof filled future (laughs).


If you enjoyed this installment of Architects of Consensus, know that we will yet go deeper into the mysteries of blockchain as seen through the eyes of the world class developers continuously working to advance EOS & EOSIO.

You can read the previous article in the Architects of Consensus series here:


Join our social channels and get involved in the conversation! Keep your eye on the blog, and join our mailing list to be the first to learn when articles such as these are published!


EOS Network

The EOS Network is a 3rd generation blockchain platform powered by the EOS VM, a low-latency, highly performant, and extensible WebAssembly engine for deterministic execution of near feeless transactions; purpose-built for enabling optimal web3 user and developer experiences. EOS is the flagship blockchain and financial center of the EOSIO protocol, serving as the driving force behind multi-chain collaboration and public goods funding for tools and infrastructure through the EOS Network Foundation (ENF).

EOS Network Foundation

The EOS Network Foundation (ENF) is a not-for-profit organization that coordinates financial and non-financial support to encourage the growth and development of the EOS Network. The ENF is the hub of the EOS Network, harnessing the power of decentralization as a force for positive global change to chart a coordinated future for EOS.