Rails accepted into Google Summer of Code

Rails was just accepted into Google Summer of Code. One of the projects on the list is improving RubyBench, which @schneems and I will be mentoring.

@system has already put a tremendous amount of effort into the project, so he has a guaranteed spot for the summer. There may be another spot or two open, and we need to start thinking about what the project will include and how we can measure success.

If you are interested in getting involved, the best thing to do is to start hacking on RubyBench; you do not need to wait until summer for that.

Top level, there are a few big goals I would like us to have:

  • Automatically detect performance regressions in Rails and report back to the Rails team.
    • Work on improving micro and macro Rails benchmarks.
  • Benchmark other Ruby implementations.
    • At the moment we only test MRI. When it comes to “other implementation” testing, JRuby and JRuby 9k are the highest priority, followed by Rubinius and others.
  • Improve the RubyBench architecture to allow for very simple backfilling of benchmarks, and create an admin UI for RubyBench so we can track how things are progressing, what is running, and where.
  • Improve documentation and the front end; we do need some designer love.
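To make the regression-detection goal concrete, here is a minimal sketch of one possible approach: compare the latest result to the average of previous runs and flag it when the slowdown crosses a threshold. The method name, data shape, and the 10% threshold are all made-up examples, not actual RubyBench code.

```ruby
# Hypothetical sketch: treat the latest result as a regression when it is
# slower than the average of the previous runs by more than a threshold.
# Assumes higher numbers mean slower (e.g. seconds per run).
def regression?(timings, threshold: 0.10)
  *history, latest = timings
  baseline = history.sum / history.size.to_f
  (latest - baseline) / baseline > threshold
end

regression?([102.0, 98.0, 100.0, 115.0]) # 15% slower than baseline => true
```

A real implementation would want to account for noise between runs (e.g. require the slowdown to persist over several commits), but the shape of the check would be similar.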

Any other ideas? Keep in mind this is just a brain dump.

Working on notifications and integrations with both Ruby and Rails could be huge for the community. I imagine a day when we might report speed and memory benchmarks directly on GitHub issues, or at least report back to the Campfire room after a commit is merged.
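As a rough illustration of the GitHub reporting idea, a runner could format the before/after numbers and post them as an issue comment via GitHub's REST API (`POST /repos/:owner/:repo/issues/:number/comments`). The message format, method names, and the `GITHUB_TOKEN` env var are assumptions for the sketch, not an existing integration.

```ruby
require "net/http"
require "json"
require "uri"

# Build a short comment body summarising a before/after benchmark result.
# The message format here is just an example.
def comment_payload(ips_before, ips_after)
  delta = (ips_after - ips_before) / ips_before * 100
  { body: format("Benchmark: %.1f -> %.1f ips (%+.1f%%)", ips_before, ips_after, delta) }
end

# Post the comment to an issue using GitHub's standard issues-comment endpoint.
def post_benchmark_comment(repo, issue, payload, token: ENV["GITHUB_TOKEN"])
  uri = URI("https://api.github.com/repos/#{repo}/issues/#{issue}/comments")
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "token #{token}"
  req["Content-Type"]  = "application/json"
  req.body = JSON.generate(payload)
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end
```

For example, `post_benchmark_comment("rails/rails", 123, comment_payload(100.0, 90.0))` would comment "Benchmark: 100.0 -> 90.0 ips (-10.0%)" on the (hypothetical) issue.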

I’m also interested in easy ways to get more applications reporting benchmarks. There are a number of open source apps, such as rubygems.org and codetriage.com, that we can leverage. We could explicitly hunt out actively developed OSS Rails apps and manually add them to the project. Alternatively, I’m wondering if we could get meaningful information from Travis test runs, or somehow utilize a third-party resource for running the benchmarks so we can focus on collecting and reporting results in meaningful ways.

I’m going to reach out to Koichi and see if he has any special requests from this project.

This is an interesting area to explore. We picked Discourse initially because it already had benchmark scripts out of the box. I think there was a discussion on Basecamp previously about whether we should maintain a custom Rails application and add scenarios for each particular area we’re testing. This method would give us more control (which gems to include, what to seed, etc.). At the same time, I’ve spoken to @kirs and he might be implementing http://railsperf.evilmartians.io/overall into RubyBench, but those are going to be micro benchmarks.

Ditto above. Would maintaining a custom Rails repository be better?

Some discussions made previously:

Instead of having GitHub send a hook to us to run the benchmark, I wonder if we can do it on Travis instead (Travis would send the hook and wait for the results). However, that would mean our benchmarks would have to complete within a reasonable amount of time. Currently the Ruby benchmarks take 15-odd minutes.

I asked what Koichi might want. He said notifications would be nice. He also said:

I think one idea. Maybe there are no support for adding new benchmark.
Adding new benchmark and measure with “past” interpreters, at least
released rubies can be help.

Basically, I think that if they find some code that isn’t performant and they “fix” it, he wants to be able to add that as a benchmark and see how it performs on older versions. Similar to this: https://gist.github.com/ko1/40110a3d951c19ed6979

Totally, we need a clean and efficient way of backfilling a benchmark.

We have this right now. Once a new benchmark is added to ruby-bench/ruby-bench-suite on GitHub, we can run just that benchmark for all releases and commits.

Reading @ko1’s mind I think what he is asking for is a place to

  1. Enter a benchmark into a textbox
  2. Click submit
  3. Have it, as quickly as possible, give him a graph

Clearly an advanced user / throttled feature but may be interesting and VERY useful to Ruby core devs.

Regardless, we need a heavily optimised process for backfilling benchmarks, one that ideally can farm the work out to multiple machines, and that is smart about the order it runs the tests so it slowly gains fidelity (meaning, for 100 commits being tested, run in the order 1, 50, 100, 25, 75, 12 …).
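The interleaved ordering above can be sketched as a simple midpoint subdivision: run the endpoints first, then the middle of each remaining interval, so every pass roughly doubles the resolution of the graph. The function name is made up for illustration.

```ruby
# Return an order for benchmarking commits 1..n that gains fidelity
# gradually: endpoints first, then midpoints of ever-smaller intervals.
def backfill_order(n)
  order = [1, n]
  queue = [[1, n]]
  until queue.empty?
    lo, hi = queue.shift
    mid = (lo + hi) / 2
    next if mid == lo || mid == hi   # interval too small to split
    order << mid
    queue << [lo, mid] << [mid, hi]
  end
  order
end

backfill_order(100).first(5) # => [1, 100, 50, 25, 75]
```

Because each interval is split around its midpoint, every commit gets benchmarked exactly once, and stopping the job early still leaves an evenly spaced picture of the whole range.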


Hi there!

Regarding running the benchmarks on other Ruby implementations, this may be interesting: https://github.com/jruby/bench9000.



I’m very interested in this idea for this GSoC, but I have some questions before preparing submission documents.

I’ve seen that there is a Ruby benchmark site running different tests in Docker instances, right? So the aim of the project is to improve this system, adding compatibility with other Ruby implementations and simplifying the way benchmarks are backfilled (maybe allowing web editing and submission of them, with something like an API to let the Docker images download the latest ones)?

I have experience building Ruby on Rails applications (both back end and front end), and experience using and building Docker images. Is there any other requirement?

Yes, it sounds like you have a good understanding of the goals here; Docker/Ruby/Rails/Linux would cover you just fine.

The best way to see if this is a great fit for you is to try out a few pull requests; it will give you a good taste of the system.

Okay, thanks for the answer!

I’ll look at the pull requests and the structure of the project itself.
Also, I’ll send the project submission using the Google Melange app, explaining in more detail how I would approach the task and my background on the subject.

I’ve seen that this project is under both Ruby and RoR organizations.
Which is the correct one?

I’ve sent a proposal to the RoR organization, but I can send it to Ruby too if needed.

Thanks for your time!

I think Rails is fine :smile: