Rubinius Metrics meet InfluxDB part II

We live in a containerized world now, and after seeing the complexity of setting it all up in the first part, Mr. Joe Eli McIlvain kindly created this awesome Docker image with everything ready for seeing some neat graphs about the Rubinius VM.


If you are on Mac OS X like me you can install Docker with the Homebrew package manager. It's worth mentioning that the Docker setup on OS X these days is a bit trickier because Docker depends on Linux kernel features. In order to run it on non-Linux operating systems you have to install boot2docker (which in essence is a Linux-based VirtualBox virtual machine in which Docker will run). Having said that, here is an outline of the whole process:

  1. Install VirtualBox.
  2. Install boot2docker and Docker.
  3. Setup boot2docker and Docker.
  4. Setup and start the Rubinius Influxdb-Grafana Docker container.
  5. Run Rubinius enabling the StatsD metrics.

Installing VirtualBox

You can use the Homebrew-cask brew extension to install applications distributed as binaries. You can install it using brew itself:

$ brew install caskroom/cask/brew-cask

And after that use Homebrew-cask to install VirtualBox:

$ brew cask install virtualbox

Alternatively you can download VirtualBox for your operating system from here or install it using your package manager.

Installing boot2docker and Docker

Here is the command for installing both Docker and boot2docker:

$ brew install boot2docker

If you are wondering why there isn't an explicit section on how to install Docker, it's because Docker is a dependency of boot2docker in Homebrew. However, you can check Docker's official installation guide for detailed information about how to install Docker on your OS, or try installing it with your package manager.

Setting up boot2docker and Docker

On a Linux based operating system you only have to start the Docker daemon, and you can skip the boot2docker steps below.

On Mac OS X the first time you have to initialize the boot2docker virtual machine:

$ boot2docker init

The first time you run this the ISO image will be fetched, so you will probably have to wait a bit. If everything goes well you will now be able to start the boot2docker VM:

$ boot2docker start

You should see something like this:

Waiting for VM and Docker daemon to start...
Writing /Users/goyox86/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/goyox86/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/goyox86/.boot2docker/certs/boot2docker-vm/key.pem

To connect the Docker client to the Docker daemon, please set:
    export DOCKER_HOST=tcp://
    export DOCKER_CERT_PATH=/Users/goyox86/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1

The last three lines are important because the Docker client uses them to connect to the Docker daemon. You can either copy and paste these lines into your current terminal session or put them in your shell's init file.

Setting up the rubinius/influxdb-grafana Docker container

When everything is in place you can go to the homepage of the Rubinius InfluxDB-Grafana container and read the instructions for starting it, or just be lazy like me and run one of the commands below.

If you are on a Linux based OS:

docker run -d \
  -p 8125:8125/udp \
  -p 8086:8086 \
  -p 80:80 \
  rubinius/influxdb-grafana

Or a Mac OS X one:

docker run -d \
  -p 8125:8125/udp \
  -p 8086:8086 \
  -p 80:80 \
  rubinius/influxdb-grafana

Notes for Mac OS X users:

Why an environment variable on Mac OS X?

From rubinius/influxdb-grafana container site:

"Because docker relies on features of the Linux kernel, it does not run containers natively in Mac OS X - it hosts containers inside of a Linux VM called boot2docker. One consequence of this is that ports mapped to the docker host from containers are not mapped to localhost of OS X, but to the boot2docker host. Therefore, in all of the above commands, OS X users should replace localhost with the IP address given by running boot2docker ip."

We need to know the boot2docker virtual machine's IP address in order to allow Grafana (a client-side application) to hit the dockerized InfluxDB server.

How to get that boot2docker ip?

$ boot2docker ip

Run Rubinius enabling the StatsD metrics

Enable Rubinius StatsD metrics emitting for your Rubinius process like this:

On Linux based systems:

$ RBXOPT="-Xsystem.metrics.statsd.server=localhost:8125" \
  rbx # (your app here)

On Mac OS X:

$ RBXOPT="-Xsystem.metrics.statsd.server=$(boot2docker ip):8125" \
  rbx # (your app here)

Here $(boot2docker ip) supplies the boot2docker VM IP address.

Some screenshots while running Rubinius Benchmarks:

RBX Dashboard.png

RBX Dashboard 2.png

RBX Dashboard 3.png

Happy Graphing!

Matz's Ruby Developers Don't Use RubySpec and It's Hurting Ruby

Matz's Ruby team released version 2.2.0 a few days ago. This release fails or errors on multiple RubySpecs, and at least one spec causes a segmentation fault, which is a serious error that can sometimes be a security vulnerability. All of these issues could have been easily avoided if MRI developers used RubySpec.

Full output of MRI 2.2.0 running RubySpec

Ruby is an extremely complex language. It has no formal definition. I created the RubySpec project to provide this definition and to help implementations check correctness. MRI developers have released versions 1.9.3, 2.0.0, 2.1.0 and now 2.2.0 without checking correctness with RubySpec, nor have they created RubySpecs to define the Ruby behaviors they have implemented for these versions, to the detriment of all Ruby programmers and businesses who depend on Ruby.

As of today, I'm ending the RubySpec project. The personal cost of RubySpec over the past eight years, as well as the cost to Rubinius, is greater than any benefit derived from the project.

Independent of RubySpec, the Ruby community needs to address this issue. The MRI developers claim to own Ruby, so it's their responsibility to clearly define the semantics of Ruby and ensure the correctness of their implementation.

Instead of contributing to RubySpec, the MRI developers write their own tests. That is their choice to make. As the issues here are complex, I want to provide some historical context and explain why the MRI tests are inadequate.

Ruby Is What Ruby Does

When I first began contributing to Rubinius in late 2006, I knew two things: I wanted Rubinius to be successful, and I wanted it to accurately reflect MRI behavior so that developers could simply switch to Rubinius to run their Ruby programs.

At the time (around version 1.8.3-4), MRI had almost no tests. There were two previous projects to attempt to write tests for Ruby: RubyTest and BFTS. Both of these projects were also very limited and used ad hoc facilities for handling platform differences.

My approach to the specs was simple: The way a Ruby program behaves is the definition of Ruby. So, we began to comprehensively write small snippets of Ruby code to show how Ruby behaved when run by MRI. Then we would match that behavior in Rubinius.

Writing tests for a language as complex as Ruby is tremendously difficult. Even for a single implementation, there are platform issues like endianness and machine word size. Now we needed to also account for different implementations and, with the beginning of Ruby 1.9 development, completely different versions with syntax incompatibilities.

I started developing a consistent set of facilities to handle all these challenges as well as writing a separate spec runner that was compatible with RSpec 2.0 syntax so the specs could be run by RSpec. The custom spec runner used very simple Ruby features to enable nascent Ruby implementations (like Rubinius) to start running specs with very little preparation.
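To give a feel for what "very simple Ruby features" means here, a toy runner in the same spirit can be sketched in a few dozen lines. This is a hypothetical illustration, not the actual spec runner: it provides `describe`/`it` blocks and counts results instead of raising, so a nascent implementation can keep running the remaining specs.

```ruby
# A toy spec runner: an illustration of how little Ruby machinery is
# needed to run RSpec-style describe/it specs.
class TinySpecRunner
  attr_reader :passed, :failed

  def initialize
    @passed = 0
    @failed = 0
  end

  # `describe` only provides grouping; it just runs the block.
  def describe(description)
    yield
  end

  # `it` counts the block's truthiness as pass/fail instead of raising,
  # so one failing spec doesn't halt the whole run.
  def it(description)
    if yield
      @passed += 1
    else
      @failed += 1
    end
  end
end

runner = TinySpecRunner.new
runner.describe "Array#compact" do
  runner.it "removes nil elements" do
    [1, nil, 2].compact == [1, 2]
  end
  runner.it "leaves false in place" do
    [false, nil].compact == [false]
  end
end
```

Because the runner only needs blocks, method calls, and integers, an early-stage Ruby implementation can execute it long before it supports a full test framework.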

The Birth of RubySpec

On May 10, 2008, just before RailsConf, I created the RubySpec project, by extracting the specs we had been writing for Rubinius, in the hope that MRI and other projects would contribute to it and use it. Some people had already started questioning why MRI did not use the specs.

Later that year, at RubyConf 2008, I gave a talk about RubySpec titled What Does My Ruby Do. Matz and several other MRI developers attended. Immediately after my talk, contributors to Rubinius sat down with Matz and other MRI developers to discuss their effort to create an ISO specification for Ruby. We asked whether RubySpec could be part of the specification but were told that it was not appropriate to include it.

Fast forward to the pending release of MRI 1.9.2. The transition from 1.8.7 to 1.9 had been torturous. There were a number of behaviors introduced in 1.9.0 and 1.9.1 that were reverted in 1.9.2. The Ruby community was very reluctant to adopt 1.9 because of the confusion about 1.9 behavior, instability of 1.9.0 and 1.9.1, and the cost of migrating code. The MRI developers had started writing more tests, but there was still almost no participation in RubySpec.

That changed suddenly when Yuki Sonoda (@yugui), as the 1.9.2 release maintainer, stated that she would not release 1.9.2 until it passed RubySpec. There was a flurry of activity that all but ceased when 1.9.2 was released.

No release maintainer since then has asserted that requirement. MRI developers have written many MRI tests in the last several years. As described below, there are still Ruby features for which there are no MRI tests, but they are writing tests. However, MRI developers have essentially written no RubySpecs for the 2.0, 2.1, or 2.2 features they have implemented.

The Problem With MRI Tests

Not too long ago, prominent Rubyist Erik Michaels-Ober asked me, "What's wrong with MRI's tests?" I was surprised by his question.

Since Erik, who is an experienced Ruby and Rails developer, asked this question, I imagine other people have wondered the same thing.

From the perspective of adequately defining Ruby semantics and providing a mechanism for other Ruby implementations to check correctness, here are the problems with MRI's tests:

  1. They include MRI implementation details. One very difficult aspect of specifying Ruby involves the boundary between a Fixnum and a Bignum. This impacts almost every method in Array, String, Range, etc. Since MRI is written in C, the machine types supported by C leak into Ruby behavior. One place this happens is the accepted range of values that can be used to index an Array. Since MRI uses a C long, there are some values that are bigger than a Fixnum that can be used to index an Array. For an implementation that isn't dependent on C, these hidden semantics expressed in MRI tests like these for Array make the tests unsuitable.

  2. They include MRI bug details. Rather than improving the general quality and coverage of the tests, MRI adds specific bug conditions to the tests. These bugs are irrelevant to other implementations. Moreover, littering the tests with specific bug cases instead of improving the overall quality makes the test suite harder and harder to maintain over time.

  3. They have no facility for distinguishing versions. The tests are specific to whatever version the MRI code supports. However, it's not that simple. MRI develops features by pushing new commits directly into their Subversion repository trunk (which is somewhat like the master branch under Git), and then periodically pulling certain specific commits into a release branch. This makes it extremely difficult to track the MRI features that are intended for an upcoming version.

  4. They have no facility for distinguishing non-MRI platform issues. As stated in the first point above, there are MRI-specific semantics that are not shared by other implementations. The MRI tests assume these semantics and this makes the tests much more difficult to use.

  5. They include behavior that Matz has explicitly said is undefined. Matz has explicitly said that modifying a collection while iterating is undefined behavior. Yet, the MRI tests include this behavior. Another Ruby implementation must exclude these tests if, for example, they result in an infinite loop. But since MRI can change the tests at any time and add more of these types of tests, this requires constantly checking which tests exhibit undefined behavior and omitting them.

  6. They are not discoverable. The tests are roughly divided into test/ and test/ruby, but locating the tests for a specific class and method is not possible through any standard convention. One must constantly grep the code and attempt to determine if the test is specifically for that method, or if the use of the method is coincidental.

  7. They combine too much behavior into a single test. Ruby includes very complex behaviors and there are many different aspects of those behaviors that may depend, for instance, on which parameters are passed or the kind of parameters passed. This example of Range#bsearch tests with Float values spans 49 lines and is hardly the most confusing test in MRI. Or this test, titled test_misc_0, which asserts something about Arrays. It is extremely difficult to identify the key boundaries of Ruby behavior and implement them correctly.

    Another example is Process.spawn, which has at least 43 distinct forms of passing parameters. It is very difficult to understand that complexity from these MRI tests.

  8. They use ad hoc facilities for setting up state and sub-processes. There are three basic kinds of specs one can write for Ruby: 1) a value computed with no side effects, 2) state change (side effects) internal to the Ruby process, and 3) state change (side effects) external to the Ruby process. The first kind of specs are the simplest and easiest to write. The latter two often require starting a new Ruby process to execute the test. The MRI tests use various different ways to do this, including IO.popen, Process.spawn, system, etc. There are also many different ways for setting up the state for the test.

  9. They are often incomprehensible. This issue of the difficulty of understanding the tests is part of several of the previous points, but it stands on its own as well. It is often not clear at all what behavior the test attempts to show. This makes implementing Ruby much more difficult. For example, what exactly is the behavior here?

  10. They are incomplete. Ruby is complex, so it's not completely surprising that testing Ruby is hard. However, this is MRI. They are defining the behavior of Ruby. Complexity is no excuse whatsoever for incomplete tests. The consequence of incomplete tests is serious. Recently, Rubinius implemented Thread::Backtrace::Location from the documentation in MRI. There are no MRI tests for Thread::Backtrace::Location#path, a method that Rails 4.1 started depending on to initialize Rails. The MRI implementation appears to have several bugs in it. This resulted in a Rails issue, a Rubinius issue, a bunch of time wasted and all our Rails 4.1 applications being broken and requiring a monkey patch while waiting almost a month for Rails 4.1.9 to be released, which still hasn't happened. It is unreasonable that the developers defining Ruby features are not writing tests for them.
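Point 1 above can be seen directly from a REPL. The following sketch assumes a 64-bit MRI build (the exact boundary values are platform dependent): an out-of-range index that still fits in a C `long` simply returns `nil`, while one beyond the `long` range raises `RangeError`, semantics another implementation has no inherent reason to share.

```ruby
arr = [1, 2, 3]

# Within the range of a C long on a 64-bit platform: MRI converts the
# index and returns nil for an out-of-bounds (but representable) value.
in_long_range = arr[2**40]          # => nil

# Beyond LONG_MAX (2**63 - 1): MRI cannot convert the index to a C long
# and raises RangeError instead of returning nil.
begin
  arr[2**64]
  past_long_range = :no_error
rescue RangeError
  past_long_range = :range_error
end
```

Nothing in the Ruby language itself demands this split; it leaks out of MRI's use of C machine types.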

The facilities and structure I built for RubySpec address all of these significant problems with MRI tests, except for completeness, which is impossible for RubySpec to solve while MRI continues to change the definition of Ruby without adequately specifying the behavior.

This issue of completeness is extremely difficult given the complex behaviors of Ruby in areas like processing keyword arguments. When calling a method, the object passed as a parameter that is considered a candidate for keyword arguments is determined by hidden semantics. A single object will sometimes be split into two separate Hashes; at other times an exception will be raised. The semantics are extremely complex. While writing numerous RubySpecs for this, I encountered several bugs and filed issues with MRI. Some of those bugs persist in Ruby 2.0 and 2.1.
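The stable core of that keyword-argument machinery can be sketched with a small example. This shows only the explicit `**` parameter case, which holds across versions; the implicit-Hash splitting the post describes varied between 2.0, 2.1, and later releases:

```ruby
# A method taking one positional argument and collected keyword arguments.
def describe_args(positional, **keywords)
  [positional, keywords]
end

# Symbol-keyed trailing arguments are collected as keywords...
with_keywords = describe_args(1, limit: 10)   # => [1, {limit: 10}]

# ...while an explicit braced Hash stays positional (Ruby 3.x semantics;
# older versions would sometimes split such a Hash into keywords).
with_hash = describe_args({limit: 10})
```

Specifying exactly when a trailing Hash is split, kept, or rejected is the hard part the post refers to.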

The existing MRI tests are simply inadequate to define the semantics of Ruby, and they impose a significantly greater cost than necessary on other implementations using them to implement Ruby behavior.

Developers And Businesses Suffer From Poor Ruby Quality

I think it's unreasonable for the MRI developers not to use and contribute to RubySpec. I don't think it's acceptable to release a stable version that segfaults when running an easily accessible and valuable test suite. Businesses that depend on Ruby and employ developers to write Ruby deserve more mature, more rigorous design and quality assurance processes.

I have personally discussed RubySpec with Matz on multiple occasions. I have sat with Matz and other MRI developers to discuss RubySpec. I have advocated for using RubySpec in my talk, Toward a Design for Ruby, in the feature request A Ruby Design Process on MRI's feature / bug tracker, in my blog posts explaining my proposed design process (A Ruby Design Process, A Ruby Design Process - Talking Points), and in my petition to adopt the proposed design process.

RubySpec has existed for almost eight years, but has clearly failed to suit the needs of MRI developers. That's disappointing to me, but I no longer view it as a problem I can help solve.

The Future Is Bright

Ultimately, I've made the decision to end RubySpec for the benefit of Rubinius as a project and to support current and future contributors.

There is a significant opportunity cost for Rubinius in supporting RubySpec. We have the ability to experiment with even better approaches to specifying Rubinius features. For example, a literate programming style that combines Rubinius documentation and code examples to serve as tests. Or a custom language that makes the specs easier to write and understand.

Attempting either of these approaches is untenable if broad compatibility across implementations is a requirement. Moreover, Rubinius needs to be free to prioritize efforts toward things that benefit Rubinius, just like all the other Ruby implementations have done.

I am more excited about Rubinius than I have been since early 2007. We continue to write very high quality specifications of the Ruby behavior we support, and all the new features that are being created. RubySpec was born in Rubinius out of a desire for the best Ruby support we could create. Whatever approach we take, that goal is deeply embedded in Rubinius.

Rubinius is getting better every day. It's the easiest way to transition your applications from MRI to a system with a just-in-time machine code compiler, accurate generational garbage collector, an evolving set of comprehensive code analysis tools, and concurrency support for all those CPU cores we have these days. If those things are important to you, we'd love to hear from you.


While the decision to end RubySpec was mine alone and not everyone fully agrees with it, I received tremendously helpful and generous feedback from many people. Special thanks to Chad Slaughter, Tom Mornini, Sophia Shao, Jesse Cooke, Yorick Peterse, Gerlando Piro, Giles Bowkett, Kurt Sussman and Joe Mastey.

Rubinius Metrics meet InfluxDB

A little bit of background

Along with the release of 2.3.0 an exciting feature landed: the basic infrastructure for always-on metrics of Rubinius subsystems. After reading the 2.3.0 release notes I just fired up my IRB session and tried:

Rubinius::Metrics.data
=> #<Rubinius::Metrics::Data:0x8c94>

Sweet! I immediately went back to the release notes and again tried to find out how I could access the metrics inside that object:

Rubinius::Metrics.data.to_hash
=> {:""=>0, :""=>0, 
:""=>0, :""=>0, 
:""=>0, :""=>0, 
:"jit.methods.queued"=>0, :""=>0, 
:"jit.methods.compiled"=>0, :""=>0,
:""=>0, :""=>0, 
:""=>0, :""=>0, 
:"memory.young.objects.current"=>0, ...}

And voila! JIT, GC, IO subsystem, memory, OS signal (and more!) stats about the current Rubinius VM running my IRB session, at my fingertips. I was delighted to see so much data accessible in near-real-time through a simple Ruby object. A quick switch back to the release notes revealed something even more promising: built-in support for emitting the metrics to StatsD (a simple daemon for stats aggregation). Are you thinking the same as me? Let's do some graphing!
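As a tiny sketch of what you can do with that hash (hypothetical helper names; it falls back to a stub so the snippet runs on any Ruby, while on Rubinius 2.3+ the data comes from Rubinius::Metrics.data), here is a sampler that computes the per-interval delta for a counter-style metric:

```ruby
# Fetch the metrics hash from Rubinius when available; otherwise use a
# stub hash so the sketch runs on any Ruby implementation.
def metrics_snapshot
  if defined?(Rubinius::Metrics)
    Rubinius::Metrics.data.to_hash
  else
    @fake_counter = (@fake_counter || 0) + 5
    { :"jit.methods.compiled" => @fake_counter }
  end
end

# Delta between two snapshots for one counter-style key; missing keys
# count as zero.
def metric_delta(before, after, key)
  after.fetch(key, 0) - before.fetch(key, 0)
end

first  = metrics_snapshot
second = metrics_snapshot
delta  = metric_delta(first, second, :"jit.methods.compiled")
```

Sampling like this on a timer is essentially what the StatsD emitter does for you, which is where the rest of this post goes.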

Looking for a tool for graphing

Ironically, this step was the most difficult. After researching open source tools for graphing, everything seemed to point in the same direction: Graphite. The plan was to have Rubinius connected to StatsD, then connect StatsD with Graphite to make some graphs. All of this seemed straightforward, except that getting Graphite up and running was, in practice, a pain; even after trying the Docker image I could not get it working. I'm lazy, so I gave up on Graphite and started to look for other approaches.

Then I remembered Brian talking about a time series database, InfluxDB, on the Rubinius IRC channel. I went to the site and totally fell in love with it: installation was easy and it worked out of the box. It seemed I was in the middle of a lucky streak, because while reading the InfluxDB documentation I saw a link to Grafana Dashboards, followed the rabbit hole, and there it was: "An open source, feature rich metrics dashboard and graph editor for Graphite, InfluxDB & OpenTSDB." That sounded like a plan: Rubinius -> StatsD -> InfluxDB -> Grafana. The "StatsD -> InfluxDB" part seemed a little bit scary, but after some googling I found this handy StatsD backend. I had everything I could wish for. Let's do some X-rays of Rubinius!


Getting InfluxDB

I'm using Mac OS X, so for me it was a matter of:

$ brew install influxdb

But you can check the installation section of the InfluxDB documentation for your operating system. The version used at the time of writing is v0.8.7.

Starting InfluxDB

Again, it can vary depending on the operating system, but on Mac OS X you can start it like this:

$ ln -sfv /usr/local/opt/influxdb/*.plist ~/Library/LaunchAgents
$ launchctl load ~/Library/LaunchAgents/homebrew.mxcl.influxdb.plist

Creating the InfluxDB databases

As I mentioned before, all the metrics will be sent by StatsD to a single InfluxDB database (to keep things relatively simple), in our case "rbx". One of the neat things about InfluxDB is its admin UI listening at http://localhost:8083. You can read more about how to log into InfluxDB here, but the default credentials are user "root" and password "root". That should be enough: go there and create two new databases, one named "rbx" and another named "grafana", which, as I explain later, is used by Grafana to store the dashboards.


Getting StatsD

StatsD is a NodeJS package, so it can be installed on your system with NPM.

If you don't have NPM installed and you are on OS X, you can install it (along with NodeJS, which bundles it) like this:

$ brew install npm

If you are using another operating system you may find this helpful.

After you have NPM you can:

$ npm -g install statsd

Getting the InfluxDB StatsD backend:

$ npm install -g statsd-influxdb-backend

Creating a configuration file for StatsD

We need a bit of configuration tweaking, but you can use this one:

$ wget -O config.js

This config file has the InfluxDB settings, such as enabling the InfluxDB backend, the credentials (just use the default ones) and the InfluxDB database to which you will be forwarding the metrics ("rbx"). I've also enabled the debugging option for StatsD because, you know, it's extremely difficult for a developer not to know what's happening ;).

Starting StatsD and testing it

You can now:

$ statsd config.js 

You should see something like:

9 Dec 13:11:37 - reading config file: config.js
9 Dec 13:11:37 - DEBUG: Loading server: ./servers/udp
9 Dec 13:11:37 - server is up
9 Dec 13:11:37 - DEBUG: Loading backend: ./backends/graphite
9 Dec 13:11:37 - DEBUG: Loading backend: statsd-influxdb-backend

Also, you can run a small test against StatsD like this:

$ echo "foo:1|c" | nc -u -w0 localhost 8125

After that you can go to the InfluxDB UI, follow the "Explore Data" link at the right of the "rbx" database and run the "list series" query. Hopefully, if everything is OK, you should be able to see your "foo.counter" there.
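If you prefer Ruby to nc for that smoke test, StatsD's line protocol is simple enough to speak with a plain UDP socket (fire-and-forget, so this runs even if nothing is listening yet):

```ruby
require "socket"

# StatsD speaks a tiny line protocol over UDP: "<name>:<value>|<type>".
# "foo:1|c" increments the counter named "foo" by 1.
payload = "foo:1|c"

socket = UDPSocket.new
bytes_sent = socket.send(payload, 0, "localhost", 8125)
socket.close
```

Because UDP is connectionless, the send succeeds whether or not StatsD has picked it up; check the InfluxDB UI as described above to confirm delivery.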


Getting Grafana

To get Grafana just download the compressed file, place it in the directory of your preference, and then either put the files into the public static files directory of your web server or, as in my case, just open index.html with your web browser.

For the lazy ones:

$ wget
$ unzip

Configuring Grafana

Grafana has the concept of datasources from which it pulls the data. It ships with a sample config file we could tweak but, as we are lazy, here is the one I'm using on my machine. There is nothing esoteric in there: we just define two sources, one called "rbx" pointing to our "rbx" InfluxDB database (where StatsD is dropping the metrics, remember? :)) and another called "grafana", used internally by Grafana to persist the dashboards.

So the process would be:

$ cd grafana-1.9.0
$ wget -O config.js
$ open index.html

Creating some dashboards

You are almost done! Now you have to go to the Grafana documentation and learn how to do some graphing! (I'm kidding.) There is a sample dashboard for seeing Rubinius metrics; your only work is to import this dashboard into Grafana (by clicking the small 'Open' folder button in the top right section of the UI, then clicking 'Import') and start looking at those amazing graphs.

Something like this:

$ wget -O RBX-DASH

Running Rubinius with StatsD support

There will be nothing to draw if you don't enable StatsD metrics emission in your Rubinius process. You can do this by passing some -X flags to the rbx command.

For example (the most simple would be to fire up an IRB session):

$ RBXOPT="-Xsystem.metrics.statsd.server=localhost:8125" rbx

Or if you are intrepid, just clone the Rubinius Benchmarks repo, run some benchmarks and see the X-rays in action \o/:

$ git clone
$ cd rubinius-benchmark
$ RBXOPT="-Xsystem.metrics.statsd.server=localhost:8125" ./bin/benchmark -t /Users/goyox86/.rubies/rbx/bin/rbx core

Or even if you are more intrepid start any of your rails apps like this:

$ RBXOPT="-Xsystem.metrics.statsd.server=localhost:8125" rbx -S bundle exec rails server


There is a bright future coming for logging, monitoring and analysis in Rubinius. The fact that we now support StatsD directly opens up a whole world of possibilities, since we can send metrics to anything supported by StatsD, allowing you to X-ray your Rubinius processes. Imagine sending your Rubinius metrics to New Relic or Skylight out of the box. Please don't hesitate to share your ideas about this, or even to request built-in support for other metrics or other targets. Happy Graphing!

Rubinius 1.3+

Now that Brian has talked about Rubinius 3.0, let's roll back several versions and talk about Rubinius 1.3+, the MRI 1.8.7-compatible Rubinius.

While MRI 1.8.7 was EOL'd in 2013 with extended maintenance ending this past summer, there are still enterprise apps on 1.8.7 that we would like to support. In order to do so, we are maintaining Rubinius 1.3+ in parallel with Rubinius 2.

However, as we've continued work on Rubinius 1.3+, we've had to make a decision on how to handle those bugs still remaining in MRI 1.8.7.


Many bugs reported against MRI 1.8.7 were never fixed or back-ported, which means that buggy behavior still exists. This leads to an unfortunate side effect: code that depends upon those bugs.

For Rubinius 1.3+, that meant we could either:

  1. Preserve MRI 1.8.7's buggy behavior for the sake of compatibility, or
  2. Implement correct Ruby behavior.

We have chosen the latter: to implement correct Ruby behavior in Rubinius 1.3+ and fix application bugs.

Apps that depend upon buggy behavior should fix those dependencies. Leaving them in is harmful to future development, as it locks the codebase down to whatever architecture preserves those bugs, encouraging stagnation and complicating app maintenance.

Our hope is that by patching those bugs in Rubinius 1.3+ and providing useful tooling, developers will be able to identify bug dependencies in their codebase and remove them.


Beyond patching such bugs in Rubinius 1.3+, we will also be updating the specs in the Rubinius and RubySpec 1.8.7 branches to reflect our decision: ruby_bug guards will be removed, or commented out (to provide historical documentation).

This means that this set of specs will fail when run against MRI 1.8.7 -- as they test for behavior that was never patched in MRI 1.8.7.

Looking Forwards

While we certainly aren't encouraging the continued existence of 1.8.7 apps, we understand that they still exist, and that moving away from them can have significant costs. With Rubinius 1.3+, we hope to provide the support those apps need to better their codebase, improve their maintainability, and eventually migrate to current Ruby versions.

Rubinius 3.0 - Part 5: The Language

This is the last post in the Rubinius 3.0 series. We started by talking about the Rubinius Team. In part 2, we looked at the development process and how we more effectively deliver features to you. Following that, we explored the new instruction set, which is the foundation for running Ruby code. And yesterday, we looked at the Rubinius system and integrated tools. Today, I'll talk about changes that I'm introducing to the language.

I mentioned that these posts were in order of importance. If we arrange the posts as in the figure below, we see that the Team and community form the foundation on which the development and delivery process is built. This gives us a basis for making good technical decisions, like the new Rubinius instruction set. In turn, that enables us to build more powerful tools for developers.

Finally, the language comes at the top. It's the least important piece, really representing the icing on the cake. The language is still important. After all, a cake without icing is a poor cake. However, the language needs to be seen in context and in proper relation to the rest of the system.

         /        Language        \
       /       System & Tools       \
     /        Instruction Set         \
   /   Development & Delivery Process   \
 /            Team & Community            \

Now that we see where language fits in, we can investigate it further. Is this chocolate icing or vanilla icing?

Everything Is An Object

There's no gentle way to say this: you've been misled about Ruby.

Everything is not an object, and objects are not everything.

I admit that I suffered this delusion that everything is an object for a long time as well, and I earnestly tried to convince others that this was true. This falsehood is causing us a lot of problems. Even worse, it's preventing us from fully benefiting from objects.

There's an important reason we use objects, and that's the reason objects are useful. That may sound circular, but it's not. Objects are useful because of the problems they help us solve. They are not abstractly useful independent of any context. In fact, when we misuse objects, they aren't very helpful.

In The Power of Interoperability: Why Objects Are Inevitable, the author suggests the following reason why objects are useful when writing programs. He actually goes further than useful and suggests that either objects, or something that simulates objects, is inevitable.

Object-oriented programming is successful in part because its key technical characteristic, dynamic dispatch, is essential to fulfilling the requirement for interoperable extension that is at the core of many high-value modern software frameworks and ecosystems.

Objects are useful because they allow pieces of a system to inter-operate while they evolve at different rates of change, by encapsulating information so that coupling (i.e. dependencies, brittleness) is reduced to a minimum.

The idea of interoperability includes the ideas of interface, boundary, and integration. Objects inter-operate at their boundaries, which define the interface with other objects. To integrate well, those interfaces must match up well enough to do useful work. At the same time, where they do not match up must not interfere with doing useful work.

It's important to understand that "interoperability" is merely a fancy way of saying, to share the work. Everything here is about sharing the work. If A sends a message to B, A is relying on B to do the work specified by the message. A could just as well do all the work itself, but that would be wasteful if B already does exactly what A needs.

With objects, we have two ways of sharing work. When we inherit from a class in Ruby, or include a module, we share work by being a kind-of the thing we inherit from. When we delegate work to another object that we reference, we share work through composition, a has-a relationship.
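These two ways of sharing work can be sketched in plain, runnable Ruby. The class and module names here are illustrative, not from the post:

```ruby
# Sharing work two ways: is-a (module inclusion) and has-a (delegation).
module Greeting
  def greet
    "Hello, #{name}"
  end
end

class Person
  include Greeting        # kind-of: Person shares Greeting's work by being one

  attr_reader :name

  def initialize(name)
    @name = name
  end
end

class Badge
  def initialize(person)  # has-a: Badge shares work by referencing a Person
    @person = person
  end

  def label
    @person.greet
  end
end

Badge.new(Person.new("Ada")).label  # => "Hello, Ada"
```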

This leads us to a new definition of "Object":

an Object is something you can send a message to, not a thing you reference (i.e. hold onto in a variable or data structure).

This definition gives a simple, unambiguous way to identify objects: "Can I send this a message?" If the answer is no, it's not an object. The focus of messages is on communication and behavior, not thingness. This is even more important when we consider proxies. The actual thing I send the message to is unimportant; insisting that it be a particular thing causes endless pain in programs. The proxy may handle the message or delegate it, and this decoupling and encapsulation of information is essential for interoperability.
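A minimal proxy makes the point concrete: the proxy is an object in exactly this sense, because it answers messages, even though it handles every one by delegation. This sketch uses plain Ruby; the class name is mine, not a Rubinius API:

```ruby
# A proxy is an object by the "can I send it a message?" definition:
# it answers messages by forwarding them, encapsulating the real target.
class Proxy
  def initialize(target)
    @target = target
  end

  def method_missing(message, *args, &block)
    @target.public_send(message, *args, &block)
  end

  def respond_to_missing?(message, include_private = false)
    @target.respond_to?(message) || super
  end
end

numbers = Proxy.new([3, 1, 2])
numbers.sort  # => [1, 2, 3]
```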

Inheritance and composition give us what I call the family and friends model of sharing work. But there's an important dimension missing from this model.

Everything Is Not An Object

We have seen that objects provide two things: a way to share work, and a means of inter-operating by encapsulating information. There is more than one way to share work. We aren't all friends and family.

At this moment, writing this post, I'm sitting at my desk in my apartment in Chicago in Illinois in the USA on earth, and so on. This boxes-in-boxes containment relationship is essentially about context. In the context of my kitchen, I may cook or clean dishes. I do not typically clean dishes in my bedroom. I'm the same person in each of these contexts, but my behavior may be substantially different. Of course, some behaviors may be the same. Whether I'm cleaning dishes or sleeping, I certainly hope I'm still breathing.

We are familiar with this containment relationship in Ruby. In the following code, the method name returns the value of the constant X. Ruby finds the constant by looking up a chain of boxes that in Rubinius are represented by the ConstantScope objects.

class A
  X = "Ruby"

  def name
    X
  end
end
There is a need in Ruby to better express this sort of relationship. We need objects to be able to share work without relying solely on friends and family. It turns out there's a simple idea that provides this very ability: functions. In Ruby, we've been so busy thinking that objects and functions are opposites that we didn't realize they are mostly complementary. I would say objects and functions are orthogonal, serving different and independent purposes.

Functions

As we see with the constant search example above, containing lexical scopes exist in Ruby. In Rubinius, they are objects you can reference and send messages to. The lexical scopes provide a mechanism to relate objects and functions.

It turns out that Ruby's syntax is just flexible enough to permit us to use a syntax for functions that is reasonably consistent with the syntax for methods (except for the ugly do on the end):

class A
  fun polynomial(a, b, c) do
    a + b * c
  end

  def compute
    polynomial 2, 4, 9
  end
end
Just like the constant search for X above, the compute method can refer to the polynomial function because it exists in the method's containing lexical scope.
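The fun keyword above is proposed Rubinius 3.0 syntax and won't parse in today's Ruby, but the lexical-scope relationship it relies on can be approximated with a lambda held in a constant:

```ruby
# A plain-Ruby stand-in for the proposed `fun` syntax: the function is a
# lambda stored in a constant, so methods find it through lexical scope.
class A
  POLYNOMIAL = ->(a, b, c) { a + b * c }

  def compute
    POLYNOMIAL.call(2, 4, 9)
  end
end

A.new.compute  # => 38
```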

This Boundaries talk by Gary Bernhardt is the best illustration of these ideas that I know of right now. I highly recommend watching it. I'm not going into depth about functions today, other than introducing them. They are a very well-understood area of computation and they are extremely useful. In the coming weeks, I'll write more about how we are using them to rewrite the Ruby core library in Rubinius 3.0.

Gradual Types For Functions

Related to functions is the concept of types. Types are a mechanism to guarantee that evaluating any "well-typed" expression will succeed, and that the result will itself be well-typed. This idea is referred to as progress and preservation. Types are an extremely powerful tool when properly applied.

Ruby's syntax is also flexible enough to permit adding type annotations like the following:

class A
  fun polynomial(a: int64, b: int64, c: int64) do
    a + b * c
  end
end

Again, I'm not going into detail about types in this post. However, Rubinius 3.0 will include gradual typing for functions. The field of gradual typing is experiencing growing interest, as illustrated by this recent talk by Philip Wadler at Galois. We will apply the best current research on gradual typing in Rubinius 3.0.
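The annotation syntax above doesn't exist in today's Ruby, but the boundary-checking behavior that gradual typing implies can be sketched as a wrapper. Here typed_fun is a made-up helper, not a Rubinius API; a nil annotation stands for "dynamic":

```ruby
# A sketch of a gradually typed function: annotated parameters are checked
# at the call boundary; a nil annotation means dynamic, i.e. no check.
def typed_fun(types, &body)
  lambda do |*args|
    types.zip(args).each do |type, arg|
      next if type.nil?
      raise TypeError, "expected #{type}, got #{arg.class}" unless arg.is_a?(type)
    end
    body.call(*args)
  end
end

polynomial = typed_fun([Integer, Integer, Integer]) { |a, b, c| a + b * c }
polynomial.call(2, 4, 9)  # => 38
# polynomial.call("2", 4, 9) would raise TypeError at the boundary
```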

There's one aspect of gradual typing that I do want to make clear: Objects are the absolute worst place to put types because types conflict with the reason objects are useful.

Objects need to provide the minimum interface to inter-operate. In other words, objects need to be as isolated as possible. Objects also need to have the ability to be incomplete. This incompleteness, or partial completeness, is not just defined by something missing. The partial-ness provides a space for behavior to evolve in a way that integrates with the already existing behavior.

In Rubinius, we have no intention to add typing to objects. Down that road awaits infinite pain and suffering.

Multiple Dispatch

There's one final idea I want to present today: the idea of multiple dispatch for functions and methods. For methods, dispatch (or sending a message) is currently done only by considering the kind of object that receives the message. Unfortunately, this forces a single method body to include logic for any number and kinds of objects that can be passed as parameters.

For example, Array#[], or element reference, can take different numbers and kinds of arguments. It might receive a single Fixnum, two Fixnums, a Range, or an Object that responds to #to_int. I'd have to go look at the RubySpecs to know if I've covered all the cases. This method is not unusual in the complexity of its interface; there are worse.

IO.popen is an egregious example. It has at least 43 possible combinations of arguments. Some of those combinations can partially overlap, and the semantics when they do are essentially undefined. The APIs in the Ruby core library are embarrassingly messy. It's obvious that we need additional support in the language to handle this complexity without a mound of the proverbial balls of mud.

In multiple dispatch, the receiver, number of arguments, and kinds of objects passed as parameters are all considered when finding the correct method to handle the message that was sent.

By using multiple dispatch, we can write each method to handle the specific work that it needs to perform based on the kinds of objects it receives, and correctly factor the shared work into a separate method. This improves our ability to comprehend the code while also improving the performance of the system.

In Rubinius 3.0, we are implementing multiple dispatch and using it to rewrite the Ruby core classes. Following the example above, we might define Array#[] as follows:

class Array
  def [](index=Fixnum())
    # return element at index
  end

  def [](index=Fixnum(), num=Fixnum())
    # return num elements starting at index
  end

  def [](range=Range())
    # return elements from range.start to range.end
  end

  def [](index)
    # coerce index and dispatch
  end

  def [](index, num)
    # coerce index, num and dispatch
  end
end

The compiler that is used to compile the Rubinius 3.0 kernel will understand multiple dispatch, so successive method definitions add to, rather than overwrite, the set of methods that can handle a message.
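In current Ruby, a later def [] would simply replace the earlier one, but the dispatch behavior can be simulated with a small registry that selects a body by the classes of the arguments. MultiMethod below is a made-up illustration, not the Rubinius implementation:

```ruby
# A minimal sketch of multiple dispatch: method bodies are registered per
# signature and selected at call time, rather than overwriting one another.
class MultiMethod
  def initialize
    @table = {}  # [Class, ...] => body
  end

  def define(*klasses, &body)
    @table[klasses] = body
  end

  def call(*args)
    sig = @table.keys.find do |klasses|
      klasses.length == args.length &&
        klasses.zip(args).all? { |k, a| a.is_a?(k) }
    end
    raise ArgumentError, "no method matches #{args.inspect}" unless sig
    @table[sig].call(*args)
  end
end

slice = MultiMethod.new
slice.define(Integer)          { |i|    "element at #{i}" }
slice.define(Integer, Integer) { |i, n| "#{n} elements starting at #{i}" }
slice.define(Range)            { |r|    "elements #{r.first} to #{r.last}" }

slice.call(3)      # => "element at 3"
slice.call(1, 2)   # => "2 elements starting at 1"
slice.call(0..4)   # => "elements 0 to 4"
```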

A note about the syntax above: def [](index=Fixnum()) defines a method that takes a single parameter that is a kind-of Fixnum. The "default argument" syntax in Ruby is the only thing that permits expressing this simply. To distinguish this positional parameter from a default argument, note that Fixnum() has no value in the parentheses. In contrast, def [](index=Fixnum(123)) defines a single default argument with the value 123. Passing a parameter that is a kind-of Fixnum will match, and if no parameter is passed, the value 123 will be used.

There's an additional aspect of the Fixnum() syntax that I want to highlight. It looks like a function or operation and that's important. These are not "types". They are match-syntax for a kind of object and also reflect an operation that would coerce an arbitrary Object instance into an object of the specified kind. In the case of Fixnum() or Integer(), it would be the #to_int method.
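The #to_int protocol mentioned here is existing, plain Ruby: Array#[] already coerces a non-Integer index through it. The Inches class is a made-up example:

```ruby
# Array#[] coerces an arbitrary object to an Integer index via #to_int.
class Inches
  def initialize(n)
    @n = n
  end

  def to_int
    @n
  end
end

[10, 20, 30][Inches.new(1)]  # => 20
```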

To summarize, we have these things in Rubinius 3.0: functions, gradual types for functions, and multiple dispatch for methods and functions.

This Is Not Rubinius X

I want to emphasize that this is not Rubinius X.

Rubinius X includes these ideas but has many additional features. My objective for introducing these features into Rubinius 3.0 is to massively reduce the complexity of the current implementation of Ruby, significantly improve the performance of Ruby, and build the foundation for Rubinius X (and other languages) to integrate with Ruby.

The ideas explained in the other posts about the new instruction set and the tools we are building are all focused on making it possible to transition existing applications to Rubinius X without paying the cost of disruptive rewrites. With this in mind, here's one more thing.

A New Machine

We are living at a time when active experimentation with languages is escaping academia and having a major commercial impact. There was a dreary day when it looked like Java, C#, and C++ would dominate programming. Thankfully, that's no longer the case. Very good new languages like Rust and Swift are commercially viable, and "experimental" languages like Haskell and Idris are making their way into industry. Very exciting!

While working on Rubinius, we have learned a lot about features that facilitate language development. Underneath, however, we have been biased toward many of Ruby's features. This has limited the utility of Rubinius for building languages whose features don't significantly overlap Ruby's. And as I've described in this post, Ruby's semantics are too limited to provide a language that is useful for many critical programming tasks today.

Accordingly, we are extracting a more useful language platform out of Rubinius as the Titanius system. With the function support I'm adding in Rubinius 3.0, we will use dynamic (Ruby-like), static (C-like), and complex (Idris-like) semantics to refine our design and implementation. We want to ensure that the languages are able to maximally reuse existing components while still having the ability to express their own semantics in a fundamental way.

I hope you have enjoyed this series on Rubinius 3.0 and that it has given you a view into a much more useful and refined Ruby language.

There are so many hard problems that we need to solve. To be happy writing code, the language must solve the problems we have. Then we can help people using our products to be happy, too. Then businesses can be profitable by building those products that we are happy making. We can't avoid understanding this deeply and we must take responsibility for it. I hope you'll join us on this journey.

I want to thank the following people: Chad Slaughter for entertaining endless conversations about the purpose of programming languages and challenging my ideas. Yehuda Katz for planting the seed about functions. Brian T. Rice for trying to convince me that multiple dispatch was useful even if it took six years to see it. Joe Mastey and Gerlando Piro for review and feedback, some of it on these topics going back more than a year. The Rubinius Team, Sophia, Jesse, Valerie, Stacy, and Yorick, for reviewing and putting up with my last-minute requests.

Rubinius 3.0 - Part 4: The System & Tools

Yesterday, I presented a look into the new instruction set in Rubinius 3.0. Today, I want to talk about changes to some of the bigger systems, like the garbage collector, just-in-time compiler, and the Rubinius Console. This will be the shortest post but we'll be writing a lot more about these parts in the coming weeks.

I hope you're enjoying these posts and finding them inspiring. I'm excited to be bringing these ideas to you. In case you've been thinking about contributing or joining the Rubinius Team but are unsure if you want a lot of public attention, I wanted to share a book I've been reading: Invisibles: The Power of Anonymous Work in an Age of Relentless Self-Promotion.

To summarize, we're not recruiting rock stars. If you are one, that's great. For the rest of us, the online world can be very hostile at times, especially with the harassment of women and minorities that we are seeing on a daily basis. We must work hard to end these harmful actions, and at the same time give people safe places to work from. If you'd like to contribute but stay anonymous, we completely support you.

Rubinius System

The phrase "virtual machine" is most often used to refer to a system like Rubinius whose primary purpose is to execute a program written in some programming language. The phrase is quite vague. The main subsystems in Rubinius are the garbage collector, the just-in-time compiler, the instruction interpreter (which we discussed in Part 3), and code that coordinates these components and starts and stops native threads. All of these are quite common in a system like Rubinius.

In this post, we also look at a set of tools that are deeply integrated into the rest of the Rubinius components. Sometimes these sorts of tools are considered an afterthought. In Rubinius 3.0, we are approaching these tools as fundamental parts of the system.

Garbage Collection

There are two changes coming to the Rubinius garbage collector. First, it will move toward a fully concurrent implementation, and we'll work on near-realtime guarantees for critical pauses. Second, we'll add a new type of memory structure that I'm calling a "functional object".

Right now in Rubinius, every object basically looks like the schematic below:

|       Object header          |
|       Object class           |
|       Instance variables     |
|       ...                    |
|       Optional Object        |
|       reference fields       |
|       Object-specific        |
|       data                   |

In Rubinius 3.0, there is an additional type of object:

|       Header                 |
|       Optional Object        |
|       reference fields       |
|       Data                   |

We are already using the second kind of object in Rubinius now, primarily for Bignum memory management, but we are formalizing and expanding our use of it to many other contexts.

Just-in-Time Compiler

The Rubinius just-in-time compiler processes bytecode to generate native machine code "functions" that execute the methods in your Ruby program. The JIT leverages the LLVM compiler library to generate the machine code. Because most of the JIT is currently implemented in C++ to easily interface with LLVM libraries, it is distant from Ruby and not easy to work with.

The most important change for the Rubinius JIT is that we'll move it as deeply into Ruby as possible. The extremely difficult aspects of generating machine code, like register allocation, instruction selection, and instruction scheduling, will still be handled by LLVM.

The other changes to the JIT are architectural. Right now, when a method is called a lot, it will eventually be queued for the JIT to compile. During compilation, the JIT will use runtime type information to combine (or inline) not just the single method itself, but a chain of methods along the call path. There are several problems with this approach that are addressed by the changes below:

  1. Multi-phase: A running program does not have exactly the same behavior at every point during its execution. Recognizing that programs have distinct phases of execution, the Rubinius JIT will adjust decisions to be more appropriate for that phase of the program.
  2. Multi-paradigm: There are two very broad categories of JIT compiler: one essentially compiles a complete method at a time while the other compiles a specific execution trace. The Rubinius JIT is currently a method JIT. In some cases, especially hot loops, a tracing JIT may be more appropriate.
  3. Multi-faceted: There is more than one way to improve performance of a piece of code, but right now the Rubinius JIT only has one way to do this. A multi-faceted JIT, on the other hand, will use many different approaches. Some methods may be transformed at the bytecode level, with new methods written from optimizing the original bytecode. Or methods may be split into many different ones depending on the types of values they see. The multi-faceted approach is not a set of tiers where higher tiers are better. It's the idea of better tailoring the kinds of optimizations to the features in the code.

The feature of the new Rubinius JIT that I'm most excited about is the JIT planner. Similar to a query planner in an RDBMS, the JIT planner will provide a way to record and analyze JIT decisions.

Metrics

There are many moving pieces in Rubinius. Making sense of how they are performing and interacting is important. To support this, Rubinius includes a number of low-cost counters that capture key metrics of the Rubinius subsystems. Metrics make it possible to observe the actual effects of system changes in the context of production code.

The Rubinius metrics subsystem currently has a built-in emitter for StatsD. Other emitters can be provided if they are useful. Already, Yorick is sending Rubinius metrics to New Relic's Insights API to monitor an application's behavior.
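Part of why StatsD works well as a default emitter is that its wire format is trivially simple: plain-text "name:value|type" datagrams over UDP. A toy emitter sketch (the class and metric names are mine, not Rubinius's):

```ruby
# A toy StatsD-style emitter: metrics travel as "name:value|type" datagrams.
require "socket"

class ToyStatsD
  def initialize(host = "127.0.0.1", port = 8125)
    @socket = UDPSocket.new
    @host, @port = host, port
  end

  # "g" marks a gauge in the StatsD protocol; returns the bytes sent
  def gauge(name, value)
    @socket.send("#{name}:#{value}|g", 0, @host, @port)
  end
end

emitter = ToyStatsD.new
emitter.gauge("vm.memory.young.bytes", 1_048_576)
```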

This has tremendous value to us as we develop Rubinius. Many times we are asked about an issue in Rubinius and when we inquire about the application source code, we're told it's proprietary. We understand the need for protecting intellectual property, but it severely limits our ability to investigate. The metrics output makes it possible to share non-critical application information in a way that will help us improve our ability to address issues that you encounter with Rubinius.

User Interface

The user interface is not something we often discuss about programming languages. As language designers, we may think and talk about it. But I have seen far fewer such discussions between language designers and language users, and almost no serious, extensive studies of usability in language design. (If you have references, please send them!)

One common discussion of usability that we do hear about in programming is the Unix tools philosophy, or the idea of doing one thing well. A simple program that does one thing well can be composed with other programs to do more complex things. I don't object to these ideas, but there's another side of the story no one talks about: What is the system underneath that makes it possible to pipe output from one program to another?

In art we may talk about figure and ground, and in building tools we must consider both the pieces the user interacts with, as well as the system behind those pieces. Over-emphasize the pieces in front and the user gets a bag of disparate fancy things that are collectively junk. Over-emphasize the system underneath, and the user gets a rigid, unwieldy block of granite that is equally unusable.

The Rubinius::Console is a set of tools combined with a coherent, integrated, systematic set of features that enable the tools to perform and coordinate well. I'll briefly talk about the main components below.

Console

All the tools are built on the foundation of the REPL, or little-c console. A REPL generally takes commands, executes them, and displays the results. All of the tools here are part of the Rubinius::Console component. I want to give a brief introduction to each one for now, but we'll be writing a lot more about them soon.

Inspector

The inspector is a collection of features that enable tracing through the execution of a program and inspecting the program state. It can show the value of local variables, what methods are currently executing, and other aspects of the running program. These features are usually included in a separate tool called a debugger. We think these features should be available at any time, whether running in development mode on your local machine, or in production mode on a remote server.

Measurements

When investigating program behavior, sometimes it is helpful to measure how long a particular piece of code takes to run. Typically, this requires setting up a separate run with separate tools to do a benchmark. We think the essence of a benchmark is simply measurement and it should be available at any time.
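Ruby's standard library already treats measurement this way; Benchmark.realtime, for instance, can wrap any block inline, with no separate harness:

```ruby
# Inline measurement with the standard Benchmark module.
require "benchmark"

elapsed = Benchmark.realtime do
  100_000.times { "rubinius".upcase }
end

puts format("%.2f ms", elapsed * 1000)
```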

Relative Measurements

Another type of measurement is the relative measurement of multiple components, usually the chain of methods invoked to perform some computation. This is usually called a profile and the focus is on the relationship between the measurements so that the most costly ones can be improved.

Analysis

A running program has both an object graph and an execution graph. The object graph is the relationship between all the objects in the system. The execution graph includes all the call paths that have been run during the program execution. The execution graph is not just the current stack of methods executing.

The analysis tools are available to investigate allocation issues or unwanted retention of references to objects, something often referred to as a memory leak. They can also investigate the execution graph to find relationships between code that are not visible in the source code due to Ruby's dynamic types.
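MRI and Rubinius both already expose a crude version of this through ObjectSpace: counting live objects after a GC is a first approximation of a retention check.

```ruby
# A crude retention check: strings referenced from `retained` survive GC,
# so they still appear when walking the object graph afterwards.
retained = Array.new(1_000) { |i| "payload-#{i}" }

GC.start
live = ObjectSpace.each_object(String).count { |s| s.start_with?("payload-") }
live  # at least the 1_000 retained strings
```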

CodeDB

While a Ruby program is running, an enormous amount of important and useful data is generated in Rubinius. When the program exits, almost all that data is dropped on the floor. The CodeDB will preserve that data, enabling it to be used at many points in the lifetime of a program, from the first line written to inspection of a problem in a running production system.

The CodeDB is more of a functional description than a specific component or piece of code. In Rubinius today, we store the bytecode from compiling Ruby code and read from this cache instead of recompiling the Ruby code. However, we still load all the code regardless of whether it is used. In Rubinius 3.0, we will only load code that is used, which will improve load time and reduce memory usage from storing unused code.

As we covered in the last post, the bytecode is merely one representation of Ruby. The CodeDB will enable us to store many representations of Ruby code across many invocations of the program, and potentially across many computers. The representations of Ruby code combined with the rich data created by running programs gives us the foundation for even more exciting tools. One of these may be a refactoring editor, which seems to be the holy grail of every object-oriented programmer. We think there are even more interesting tools than automated refactoring and are excited to tell you more about them.

Tomorrow, we will finally tie some of these pieces together in Rubinius 3.0 - Part 5: The Language.


I want to thank the following reviewers: Chad Slaughter, Joe Mastey, and the Rubinius Team, Sophia, Jesse, Valerie, Stacy, and Yorick.