This is a blog post about nothing; at most, wasting puffs of carbon.
Have you heard about "Free Functions" before? In a super-quick nutshell they are any function that is not a member function of a struct or class. We use them all of the time and it's likely you've written some without the intent of doing so.
My first formal introduction to the concept came from Klaus Iglberger's 2017 talk "Free Your Functions!". If you have not watched it yet, I would recommend taking the time to listen to it. During the presentation, there was a claim made:
Writing code as free functions may be more performant.
But if you watch the talk in its entirety, there's no benchmark given.
I was intrigued, because Klaus clearly explains the benefits of free functions, and I do like them from a software design and usage perspective. But when it came to any hard measurement, there was nothing to back this statement up. His presentation is a bit on the older side, so the information Klaus was presenting was likely relevant at the time. But now, 8+ years later, it may no longer be.
As of late, I've been very interested in the performance metering of C++, so I thought this would be interesting to investigate. I am kind of putting Klaus on blast here, so I thought it only fair to reach out and talk to him about it. I did correspond over email with Mr. Iglberger and let him read this before publishing.
The Hypothesis
Freeing a function should not have an impact on its performance.
I don't know about the inner workings of compilers, nor how their optimizers work. I'm more of a "try a change and measure it" sort of person. If unbinding a function from a class could improve performance, that's a low-cost change to the code!
We'll benchmark free vs. member functions in two separate ways:

- A smaller, more individual/atomic benchmark
- A change in a larger application
I'm more of a fan of the latter since in the real world we are writing complex systems with many interconnecting components that can have knock-on effects with each other. But for completeness we'll do the smaller one too.
On this blog, all of the posts from the last five years have involved PSRayTracing. But I feel it's time to put that on the shelf. Instead, it would be more practical to grab an existing project and modify its code to see if we can get a speed gain from freeing a function. We'll use Synfig for this.
A Simple Measurement
This is more in line with the benchmarking practices I always see elsewhere. We'll run this test across different CPUs, operating systems, compilers, and optimization flags. Let's say we have a simple mathematical vector structure with four data members:
struct Vec4
{
    double a = 0.0;
    double b = 0.0;
    double c = 0.0;
    double d = 0.0;
};
And we have some operations that can be performed on the vector, such as dot_product() and normalize().
We'll test these methods written three different ways:
- As a member function
- Passing the structure as an argument
  - The function is no longer a bound member, but technically "free" (though it requires knowledge of the struct)
- Passing the data members of the structure as function arguments
  - This is the "properly freed" function
For example, this is what the function normalize() would look like with each style:
// Member function (lives inside Vec4)
void normalize()
{
    const double dot_with_self = dot_product(*this);
    const double magnitude = sqrt(dot_with_self);
    a /= magnitude;
    b /= magnitude;
    c /= magnitude;
    d /= magnitude;
}
// Free function, passing the whole struct
void free_normalize_pass_struct(Vec4 &v)
{
    const double dot_with_self = free_dot_product_pass_struct(v, v);
    const double magnitude = sqrt(dot_with_self);
    v.a /= magnitude;
    v.b /= magnitude;
    v.c /= magnitude;
    v.d /= magnitude;
}
// Free function, passing the individual data members
void free_normalize_pass_args(double &v_a, double &v_b, double &v_c, double &v_d)
{
    const double dot_with_self = free_dot_product_pass_args(v_a, v_b, v_c, v_d, v_a, v_b, v_c, v_d);
    const double magnitude = sqrt(dot_with_self);
    v_a /= magnitude;
    v_b /= magnitude;
    v_c /= magnitude;
    v_d /= magnitude;
}
To benchmark this, we'll create a list of pseudo-random vectors (10 million), run it a few times (100), and then take down the runtimes of each method to compare. For the analysis, we'll compute the mean and median of these sets of runs. Among the three, we want to find which ran the fastest. If you wish to see the program, it can be found in its entirety here: benchmark.cpp.
Different environments can yield different results. To be a bit more thorough, we'll compare on a few different platforms:
- Three CPUs: an Intel i7-10750H, an AMD Ryzen 9 6900HX, and an Apple M4
- Three different operating systems: Windows 11 Home, Ubuntu 24.04, and macOS Sequoia 15.6
- Three different compilers: GCC, clang, and MSVC
Not all combinations are possible (e.g. no Apple M4 running Ubuntu 24.04 with MSVC generated code). Whatever was feasible was tested.
Compiler flags can also play a role. For even more zest in this test, optimization flags such as -O0, -O3, -Ofast, /Ot, and /Ox have been specified. This post doesn't have the exhaustive list; check the Makefile to see.
Across these 4 dimensions, there are 48 different combinations and 12 functions to run so that's 576 sets of runs. It... took a while... If you wish to see all of the final data and analysis, it can be found in this Excel sheet and this Jupyter Notebook.
I don't want to bore you with any of the analysis code (see the Jupyter Notebook if you wish).
The key variable in it is ms_faster_treshold = 10.0: for one style (e.g. "pass by args") to be called more performant than the other two, it needs to be at least 10 milliseconds faster.
So What was Discovered?
There's barely any difference at all. Out of those 576 run sets, only a whopping 8 had a significant performance difference. Here are all of them:
A lot of the rows don't show a large enough value for time_ms_faster. Many aren't even single-digit improvements; a good chunk are only 0.34 ms or even 0.03 ms faster than the other two, which is not conclusively faster (or slower). Note that a "run set" can take anywhere from 150 ms to 300 ms to complete, which is why we're looking for a speedup of at least 10 ms.
So in about 98% of the cases, whether the function was free or a member had no significant effect on performance.
Where there are gains, they come (almost) exclusively from using clang on x86_64 Linux, at nearly all optimization levels, and only with normalize() as a free function using the "pass by args" style. Eyeballing the numbers, it's shaving ~35 ms off runtimes of 185 ms ~ 205 ms. That's around a ~15% performance increase, which is actually significant! But keep in mind, this is only 2% of the run sets.
From this benchmark, I think it might be fair to conclude this:
- Using free functions (with pass by args) can be more performant, but only in specific situations
- Member vs. free in general doesn't have a performance gain or hit
This was a very limited benchmark; not my favorite. What happens in a larger application?
Larger Systems
Small benchmarks are fine, but they can be too "academic" or "clinical", in the sense that when they are applied in a bigger program (i.e. "real world"), the results may be vastly different.
As mentioned before the previous posts on this site were concerning my pandemic pet project PSRayTracing. I think it's time to retire it and use something else. Synfig!
If you're wondering "Why Synfig?", let me elaborate:
- It's another C++ computer graphics (animation!) project
- It's a bit more "real world practical" than my own ray tracer
- Fully open source
- It has a repo of nearly 700 test cases (.sif files)
- Hacking on it (and around it) is quite easy
- Automating testing was a cinch
The premise here is we will free a function used in the program and see if it leads to any significant change. The v.1.5.3 release (of Aug 2024) will be the version of code tested.
What to Free?
A method that is called a lot.
Freeing a function that is used sparsely makes no sense. I contributed to the project a very long time ago, but I'm not that familiar with the code base; I don't know its ins and outs. It wouldn't be fair to spelunk into the code, grab a random member function, free it, and then do the performance metering. Fortunately, there are tools to find a good candidate.
Callgrind is perfect in our case. For those of you who are unfamiliar, it's part of the Valgrind suite. Its job is to generate call graphs, which can be used to see what functions are being called the most in an application. (Just note that this is very slow to run.)
CMake's build type must be set to RelWithDebInfo, which compiles the application with the -g -O2 flags: -g adds debugging information to the Synfig binaries, and -O2 gives a reasonable level of optimization. The final product would use CMake's Release mode (giving -O3); we'll switch back to that when running the benchmark.
Raw Callgrind output will look like this:
# callgrind format
version: 1
creator: callgrind-3.24.0
pid: 3476
cmd: /home/ben/Projects/synfig/cmake-build/output/RelWithDebInfo/bin/synfig /home/ben/Projects/synfig-tests/rendering/sources/icons/tool_brush_icon.sif
part: 1

desc: I1 cache:
desc: D1 cache:
desc: LL cache:

desc: Timerange: Basic block 0 - 163286698
desc: Trigger: Program termination

positions: line
events: Ir
summary: 800111840

ob=(235) /usr/lib/x86_64-linux-gnu/libopenmpt.so.0.4.4
fl=(801) ???
fn=(93066) 0x00000000000285a0
0 5

fn=(93056) 0x0000000000028610
0 9
cob=(4) ???
cfi=(179) ???
cfn=(93062) 0x000000000bfdc920
calls=1 0
0 1292
0 1
cfn=(93066)
calls=1 0
0 5
0 3

...

368 2
+13 6
cob=(4)
cfi=(179)
cfn=(67574)
calls=1 0
* 24
fi=(1044) 3235 3
fi=(1045) 381 1
fi=(1044) 3235 1
fe=(1046) 2381 2
fi=(1044) 499 2
fe=(1046)

totals: 800111840
Any output can easily be around 500K lines; I'm cutting a few out for brevity's sake. This seems like a bunch of gibberish, but it needs to be loaded into something like KCachegrind to make some more sense of the output.
Here we can see this synfig::surface<T>::reader_cook() function is called quite a bit. Maybe it's a good function to free? No. This was only checking a single file, we should be more thorough. Synfig's repo of test data has hundreds of files we can check. An instinct might be to grab a handful of files from this repository and check those. But we can do better: check it all.
Python is amazing. It has everything you need for automating any task. Running Callgrind on a directory tree of 680 files takes a while for a human to do. Python can automate that away for you. So I wrote a script that does that.
The next problem is that we have 680 files containing the Callgrind output. We're not going to load each one of these files in KCachegrind. That would be absurd.
Python is magic. We can easily combine all of this output to make a sort of "merged Callgrind report" from the entire test repo. So I wrote a script that does that.
This uses a slightly different tool by the name of callgrind_annotate, which essentially is a command line version of KCachegrind. It gives us what we need to know: which functions are called the most. Thus letting us hunt down the best candidate to free. One thing to note is that there are a lot of non-Synfig functions in the Callgrind output. For example, if you look at the above screenshot, things like strcmp() pop up. We need to filter for only Synfig's code, and that's easily solved via grep:
grep synfig combined_callgrind_output.txt
Which leads us to these candidate functions:
Instead of testing all three, we're only going to test freeing Color::clamped(). It's very simple and what I think is the most straightforward to liberate.
How to Free?
There are three different ways we can unbind this function from the Color class:
- Change it to a friend function
- Set the data to public
- Refactor the function to require the caller to pass in arguments
  - This is the proper way
Similar to the smaller benchmark, I don't think there should be any performance difference between the baseline (no changes) and any of the three methods above. friend and public are included here for completeness, despite not being fully correct freeing techniques. I thought it would also be interesting to see if they have any unintended side effects that could affect performance.
How to Measure?
We can modify the recursive Callgrind script to instead render all of the sample Synfig files, along with taking down the runtime.
We're going to be more limited though:

- We'll only keep it to Intel & AMD Linux machines with GCC (14.2)
  - I don't believe building with MSVC works at the moment
  - Building with clang didn't work (see this ticket)
- We'll only run each file 10 times, as some of the .sif files can take 30+ minutes to render
Results & Analysis
Sooo... This also took a while... Just shy of 78 hours. The Jupyter Notebook analysis is here, and the data measurements in this directory.
Altogether the runtimes taken are:
At the surface there are two observations:
- friend functions and public data members were slightly slower
- On the Intel CPU, using "pass arguments" (the correct freeing method) was the only one that was actually faster
There is a concern though: the percent difference from the baseline is only about half a percent. That isn't significant; it's fair to call it noise. I wouldn't feel confident saying that free functions were a performance gain or hit here. We only took 10 samples for each file; I'd want around 25 before feeling confident.
We did 10 runs of each Synfig file. What if we took the best runtime for each environment and then totaled that up?
These are similar results, as the duration_difference correlates to the above. But since there is not any significant speedup (beyond 1%), I have to say it's still noise.
Right now we are looking at the cumulative runtime of the entire test repo. What if we found certain test cases that were faster? There is a chance that a specific .sif file could render faster with a free function. Luckily we have all that data to find out if there are any instances where one method was more performant. Applying a minimum 2% faster threshold:
Wow. We have a 44% performance increase for a single case, followed by a bunch of 30% boosts!! That is massive! But... I am a little skeptical. We need to peek into the data. Looking at all of the runtimes for that no. 1 performer:
So... This is a little awkward. Doing 10 runs of each .sif file (for each combination), 9 times we have a measurement of 164 ms, but 1 time it took 114 ms. It doesn't feel right to call that a best case run. I'd call that a bad data point. It's possible there could be some others. Luckily there are ways we can throw out undesirable data. Z-Scores are a way to find outliers:
If we can find them, we can throw them out. Using a threshold of 2.0 ends up tossing out about 5.3% of the data. Not ideal, but something I think we can still work with.
Now this is interesting. When we accumulate all of the runtimes with this cleaned data, each time the free function performs faster than the member function! A few are in the range of noise, but the others are significant. A 1.5% ~ 2.8% speedup! But let's take a look when we filter for the best case runtime:
Now we have a different story. All of the duration differences are back in the range of being noisy. From here, we need to take a deeper look at the (cleaned) data. I took a look at a few of the run sets, finding one in particular that is quite peculiar: no. 754 (which is file 075-ATF-skeleton-group.sif). Computing the Z-scores for this run set:
In these 10 data points:
- Half congregate around 515 ms
- The other half hover at 464 ms
- None of their Z-scores go above the threshold (2.0), so each one is kept in
- All of the Z-scores are effectively -1.0 or 1.0
This unfortunately means that the entire run set is bad data, which further cascades to the other tests that use the same file, thus requiring us to throw out 80 data points. Not good.
I tried adjusting to an even more sensitive Z-score threshold (e.g. 1.5, 1.0, etc.), but that led to throwing out a whopping 30% of the original data. If you play around with the Z-score you will find cases where the free functions were faster, then slower, then faster, then slower... I even tried out IQR as another means of removing bad data, but that also didn't work as desired.
With what we have right now, more testing would be required to make a definitive answer. But for Synfig, I don't find freeing functions concretely helping or hurting performance.
It's also likely that Synfig might not be the best "large integrated benchmark", seeing as some files had fluctuating runtimes. Maybe Blender would be a better test bed. This is one of the issues of working with a code base you don't know that well: there could be something non-deterministic in the supplied test files. I don't thoroughly know this code; I'm making a wild guess here.
What has been done here is very much in the realm of microbenchmarking. It's hard to do, and difficult to find consistent results.
Conclusions & Thoughts on Free Functions
I don't think there is a practical performance benefit.
Architecturally, I can see how free functions make sense. But if you're rewriting a function to free it in hopes that it will make your code faster, it probably will not. It will be a waste of time that could introduce bugs into a working code base. Once again, let me remind you: this is an article about nothing.
In the smaller benchmark we did find a significant performance increase, but I need to remind you that it was only observed 2% of the time, and in a very specific case (clang compiled code on Intel/Linux machines). But when we freed a (commonly called) member function in a larger application, the performance bounced between being measured as faster or slower, all depending on how we looked at the data.
I don't want to discourage others from writing free functions just because there are no real performance benefits. I want them to write free functions if they think that's the better solution for their problem. It's very likely that back in 2017, when Klaus first gave his talk, free functions were more performant than member functions. Since then, compilers may have improved to optimize member functions better. As stated before, I'm not familiar with the internals of compilers and their under-the-hood advancements. I'm a very surface-level C++ developer, and I have to defer to people smarter than me on this matter.
This is a bit of an aside, but at one of my early jobs I had a higher-on-the-food-chain coworker who one day wanted everyone to only write code (in Python) using functional paradigms. This was many years ago, when Haskell and its ilk were much more in vogue. His claim was "functional programming is less buggy." He never provided any study, research, resource, document, or database to back up this claim. His reasons were vibes, appeals to the authority of Hacker News, and the fact that the URL had "medium.com" in it. The paradigm shift did nothing other than introduce new problems; for example, taking a simple 3-line for-loop and blowing it up into a 7-line indecipherable list comprehension (this happened more than once). If you didn't fall in line, his solution was to berate and shame you in a public Slack channel and ignore your PRs. I'm glad I don't work with this guy anymore.
You might have thought that we proved absolutely nothing here and just wasted a bunch of electricity and time; I've said as much twice already. But we've also discovered the inverse: if you want to free a function, you can rest assured there isn't a performance hit. We've also incidentally shown that public vs. private data, friend functions, pass-by-struct, etc. should not cause any performance changes.
I hope that you've watched Klaus' talk, because he does an excellent job of explaining the benefits of free functions. The big one for me is flexibility. I used to dabble in the Nim language a lot in my past, and I still miss it; it's really cute. It has Uniform Function Call Syntax, which makes any language way more ergonomic. UFCS has been proposed for C++ multiple times, and was even talked about in Klaus' presentation. Herb Sutter's Cpp2/cppfront (which I think will be the next major evolution of the language) has support for UFCS. And as we now know, there's no performance hit for writing code this way. Give it a try.
My only criticism of the old presentation is that Klaus never provided a benchmark. I have been watching his talks for years and have always enjoyed them. One of his more recent talks, from 2024, does include one. I would like to thank him for taking the time to email me back and forth over the past few months while I worked on this.
If anyone here is also looking for a project to brush up on their C++ skills, Synfig is great to check out. These people were very kind to me years ago when out of nowhere I just plopped in some tiny performance improvements and then didn't return for 5 years. They make it so damn easy to get set up. Blender gets a lot of attention, but I think this project needs some love too.
Since this is now my 4th try investigating performance claims in C++, if anyone has any suggestions on another topic they would like me to investigate, please reach out. I've made lots of scripts and tools in the past year+ to do these investigations. I'm wondering if there is any interest in creating a generic tool to do performance metering and test verification. I want to take a break and work on other projects in the near future though. So I won't be doing anything like this for a while.
Likewise, if anyone is interested in me profiling/investigating the performance of their code, reach out as well.
My main hope is that with these articles, we will stop making claims (i.e. performance improvements) without providing any measurements to back up our statements. We're making wild assertions but not testing them. This needs to stop.
If you just scrolled down here for the tl;dr: free functions don't have much of any performance difference from that of member functions.
Animation is one of my loves. Back when I was a second and third year university student, I had the opportunity to take some animation classes. They were more focused on things like The Principles of Animation instead of general film making. Back then I did upload two of my assignments that I was really proud of, but three years later, I've kind of worked up the courage to upload the majority of my work from those few classes. I'd like to share them with you; they are all below:
A Ball in a Box (Intro. to Animation Final) from Benjamin N. Summerton on Vimeo.
For RIT's "Intro. To Animation," course, there was a final assignment to show the instructor what we have learned. So I decided to make this little half a minute short called "A Ball in a Box."
I know there are quite a few sound syncing issues, I do apologize.
Dynamics of Musicality Assignment from Benjamin N. Summerton on Vimeo.
For week 3 (?) of RIT's "Intro. to Animation," course, we had to create an animation that would sync up to the playing music.
Bowling Ball Bounce (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to do a ball bounce, but with a very heavy bowling ball. I think I got the weight of it pretty good.
Perspective Ball Bounce (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to do a ball bounce in perspective. I decided to have a little extra fun with it and pretend there were some obstacles making it bounce a little differently.
Water Balloon Roll (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to have a balloon filled with water roll off of an imaginary ledge.
Paper Fall (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to make a sheet of paper fall through air; I had a little fun with it.
Flour Sack Jump (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to have a sack of flour do a little jump.
Flour Sack Getup, Walk, & Slip (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to have a sack of flour wake/get up, walk a step or two, then fall over.
Tarzan (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We were given a sequence of key frames of a "Tarzan," character doing a jump n' swing with a rope, and we had to do everything in between.
Walk Cycle (Free Exercise) (My First!) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had walk cycles coming up in a few weeks, so I wanted to get a little bit ahead and try doing one myself. I showed it to one of my animation professors and he said that it looked more like a march... I guess that's something at least. :P
Walk & Run Cycles (Assignment) from Benjamin N. Summerton on Vimeo.
A pencil test for an animation assignment I did a while back at RIT's SoFA. We had to do walk and run cycles (in place), along with a transition for each.
Lip Sync (Assignment) from Benjamin N. Summerton on Vimeo.
A paper cutout animation assignment I did a while back at RIT's SoFA. We had to do a lip sync, choosing any audio we liked. I went with a small snippet from Charlie Chaplin's "The Great Dictator."
Before I begin, you can find the source for Blit over here.
I want to talk a little bit about a project I worked on every day from July 2014 till the end of August 2015. You may have seen a few entries about it back on earlier posts; that project was something I called “Blit.” If you’re wondering what Blit was, it was my attempt at trying to make an Animation focused art program. It was pretty ambitious for someone like me at the time.
There were two main reasons why I started to work on it:
- Back when I was an undergraduate, I was part of a student group where we had to do these things called "major projects," each year if we wanted to retain membership. They usually are of a technical nature (programming & engineering). This is where my initial drive came from
- I'd never worked on a "large," or "longterm," project before. Everything else I'd done up to that point was small things like class assignments, course projects, or tasks for my internships. I had friends who had worked on their own projects for two or three years straight and made some really cool stuff. I really wanted to be able to tell others (mainly prospective employers) "Yeah, I've been working on this thing for over a year. Want to take a look?" Other than just "having something," I also wanted to learn how to manage a larger and lengthier project.
 
The "major project," was something that was pretty easy to fulfill. But for the second, I did something kind of stupid that worked well for me. I told myself "Alright, I'm going to work on a project that will have a 365 day long GitHub streak." In reality, git streaks are a silly way to track progress. I was working on Blit in a private repo, so the outside world would not see my streak at all. I feel bad for the people who feel the need to maintain one. For me, it was a reminder to build on Blit each day. It worked.
Whether it be programming, logging issues, source code cleanup, design & planning, writing documentation, etc., I had a minimum goal of one meaningful commit per day. Normally I would spend an hour on Blit per day (more on the weekends). I would keep at it until the kitchen timer at my side beeped. Somehow that little thing was able to keep me focused for a straight hour.
So What Is (or Was) Blit?
I’ve always been someone who’s liked art and programming. Especially combining the two. One of my favorite genres is pixel art, or sprites as they are also known. I’ve dabbled in making a few other art programs before, but nothing like this.
Originally, Blit was supposed to be only a sprite animation tool that had a modern look and feel, but my ideas for it grew greater (*sigh* feature creep). There are many other spriting tools out there like GrafX2 and Aseprite (and other 2D animation programs like TVPaint). I'm not saying that it's wrong that they make their own GUI toolkits, but it feels kind of odd. I really wanted to bring these types of programs out of the days of the Amiga. After doing some initial research, I settled on using Qt. Here are my reasons:
- It’s cross platform. I work on a Linux system, but I want my Windows and OS X friends to be able to use what I make
 - It’s a C++ library; my native tongue. But there exists bindings to other languages, such as Python
 - There’s a lot more to Qt than just widgets. It really is a fully featured desktop application framework
 - It has a massive community around it and it’s very well documented. So if I ever ran into trouble I’d be able to find some help
 
Before I move any further, you might be wondering where the name "Blit" came from. Since it had a focus on 2D graphics, the name came from the "Bit blit," algorithm. I used to do a lot of programming with libSDL, so the function SDL_BlitSurface() has been burned into my brain. I thought it would be a cute name too.
I also wanted to keep more of a “traditional animation,” approach to Blit. Instead of drawing on images there were “Cels.” Layers were called “Planes.” Instead of a Dope Sheets I had “Exposure Sheets.” I didn’t call it “onion skinning,” but “turning on the Light Table.”
Starting Out
As mentioned before, I was focused on sprite animation (originally). I wanted to keep things as easy as possible. While I did consider using Qt’s native C++ libraries, I decided on making the program in Python with PyQt. Scripting languages are typically much faster to write code for. I felt as if I would be able to get more done in less time. I didn’t think that there would be too many computationally intensive procedures to worry about. In the event that I needed some performance boost, I could always write a C/C++ extension for Python.
After choosing my tools, the first thing I did was draft some design documents. These included a user interface mockup and an initial file format structure. I started to log tickets on the GitHub issue tracker. I had a miniature road map to start from. Within a month and a half, I was able to load up one of my files into Blit, do a little simple Cel & Frame editing, and then save it. You couldn't do too much with it, but I thought it was a good starting point.
During my initial research of Qt, I discovered something called the "Graphics View Framework." There were a lot of widgets that I had to custom make, such as the Timeline or the Canvas, and it made my life much easier. It really is one of the nice features of Qt. If you're making a heavily graphical application, you should take a look into it.
Despite being able to get a basic animation loaded, edited and played back, I was starting to run into some issues with the development language: Python. I had issues with things like circular imports and nested imports (python files imported from many directories deep). I don’t want to go into the details of how they were affecting me and the project, but all I can say is that they were driving me up the wall. So I devised a solution: Switch to C++.
Now, switching development languages is not always advisable. But at the point where I was, it was feasible to do and would likely have a better impact on my project. Nested imports are a non-issue in C++, and circular imports are fixed with simple include guards. On top of that, I wouldn't have to use PyQt's bindings anymore, and Python would no longer be a performance bottleneck since it would be gone. Working at my usual hour-a-day pace, it took somewhere between two and three weeks to port everything I had to C++. I wasn't happy about losing that time to work on new features, but I think it was the better choice in the end.
I didn’t entirely ditch Python & PyQt. If I needed to prototype a widget, I would use those tools. It helped to realize ideas pretty quickly, then later I would integrate it into the C++ source.
Feature Creep, “Future Planning,” and Broadening Horizons
In the first couple of months that I was working on Blit, more and more ideas poured into my head of what it could or should be able to do. We all know what this is: feature creep. Whenever I thought of a cool new thing I wanted to add, I weighed the cost of adding it within my current milestone, the next one, or burying it in the issue tracker. This is where I developed the “Future Planning” tag. If something popped into my head, probably 95% of the time I would put it under that tag without marking it for any milestone. It was a good way of telling myself, “Alright, I think this would be a good thing, but I need to focus on other stuff right now.” This actually worked pretty well for me; at all times, the most populous tag in my issue tracker was “Future Planning.”
Around 100 days into the project, I felt like I had a good direction I was going in. I was nearing the end of my (second) internship and would be left with nearly two months before classes began again. With all of this free time, I set myself the goal of “be able to draw a bouncing ball animation and export it as a spritesheet” before Christmas hit. I achieved that.
By this time you could move Cels around on the Frames, move the Frames on the Timeline, and adjust their hold values. I think I had focused more on the staging of objects than on editing them. To work on this shortcoming, I decided to start on a Tool interface. I had the idea that editing tools should be plugins and people should be able to write their own; a very common idea in art applications. Instead of only “put pixel” and “erase pixel,” I added line/shape drawing and filling, and was working on a soft brush tool.
When I got back to school, I fulfilled that first goal of presenting it as a “major project” in my student group. It was well received for what it was at the time: a very simple pixel art animation tool. Though I soon started to think beyond simple spriting. I consider myself not only a fan of animation, but someone who really enjoys making it. I started to ponder: “What if Blit could be used for all sorts of 2D animation, not just pixel art?”
I didn’t think it would be too hard to add a camera hookup to the program (something that I’ve done with Qt before), so Blit could be turned into an application for pencil tests, capturing paper-drawn animation, or even stop motion. My rule became: “If it’s bitmap based, Blit should be able to do something with it.” I also thought there wasn’t a good free (both as in beer and as in speech) software solution for 2D computer animation. TVPaint, Dragonframe, and FlipBook were used a lot in the animation department. I can understand the high cost of niche professional software, but it really sucks for students who want to learn to animate and are already paying a small fortune in college tuition.
To make Blit more generic, it had to undergo something I dubbed “The Grand Refactoring.” The whole animation module was like this: an Animation owns an XSheet, which owns a list of Frames, where each Frame owns a list of Cels. No reuse. This was good to get started with, but pretty bad, since in the real world animation is reused all of the damn time. So I devised this system instead:
As it would force me to fix up almost every single thing in the program that touched the Animation module (including the file format), I set this up as its own “half milestone.” It took about a month and a half to complete. It really sucked not being able to add any new features during that time; only endless refactoring. At the end of it, all the logic was in place to stage the same Cel across multiple Frames, or to instance a Frame multiple times in the Timeline. But because I was focused on fixing things up, I never added an interface where the user could actually reuse Cels and Frames; if they wanted to, they had to edit the sequence.xml file by hand. So it was there, and it worked, but it wasn’t usable by the layman.
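The reuse-oriented design described above can be sketched with shared ownership. This is my own minimal reading of the idea, not Blit’s actual code (all names here are hypothetical): Cels and Frames are held through `shared_ptr`, so the same Cel can be staged on several Frames and the same Frame instanced several times on the Timeline, without copying pixels.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// A Cel is a drawable bitmap region (pixel data omitted for brevity).
struct Cel   { int width = 0, height = 0; };

// A Frame stages Cels; it shares them rather than owning them exclusively,
// so one Cel can appear on many Frames.
struct Frame { std::vector<std::shared_ptr<Cel>> cels; };

// The Timeline sequences Frames; the same Frame object may appear more
// than once, which is how a held/repeated drawing is reused.
struct Timeline { std::vector<std::shared_ptr<Frame>> frames; };
```

Repeating a drawing is then just pushing the same `shared_ptr<Frame>` onto the Timeline twice: editing the Cel once updates every place it’s staged.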
While taking classes and juggling other (smaller) projects, it sometimes became difficult to make meaningful contributions to Blit. I tried to stick to my “one hour a day” rule, but that became hard. Also, refactoring isn’t fun. You don’t get to see new features; you’re restructuring stuff that already exists. You might also break things and then have to spend time fixing them. It’s hard to stay motivated when nothing is new or exciting.
My brain was usually fried after writing code for my class assignments, but I found that (better) documenting the source code, reviewing issue tracker tickets, and revisiting design documents wasn’t too taxing. If I recall correctly, there was a two-week stint where that was all I did.
Despite all these speed bumps, I got to do something really cool with Blit at the end of the year. If you’ve read some of my older blog posts, you may have seen this thing I made called MEGA_MATRIX; for those who don’t know, it’s a 24x24 LED dot matrix display that I developed in tandem with Blit during the early days of the application. Anyway, at the end of the year my college hosts what is essentially a campus-wide show-and-tell day. I thought it would be neat if I could let people doodle animations in Blit, then upload them onto MEGA_MATRIX. Turns out it was. I made a special fork of Blit called “The MEGA_MATRIX Edition,” where users could only draw in two colors (red and black), preview their animations, and then upload them to an Arduino driving the display. One of my friends said it was his favorite thing at the festival because “[I] practically made a hardware implementation of Mario Paint.”
Altered Scope, One Full Year, and the End of Development
At the beginning of the summer of 2015, I was off to my next internship. During the day I would write C# code for a rendering infrastructure. After work I would exercise, watch some TV, and play a few video games, but also work on Blit for, well, at least an hour a day.
After “The Grand Refactoring” and the MEGA_MATRIX Edition, I was able to get a few more features out of the way: changing the Canvas’ backdrop color, a pixel grid, selective playback, a color picker tool, and more. One of my favorite additions was onion skinning (I called it the Light Table). Thanks to the newly redesigned Animation module, it was actually pretty easy to implement.
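Onion skinning is conceptually simple once frames are individually addressable: draw the current frame at full opacity, and its neighbors fainter the further away they are. As a toy sketch (the function name and the 0.4 falloff step are my own assumptions, not Blit’s actual values):

```cpp
#include <algorithm>

// Opacity for a frame `offset` steps from the current one: 1.0 for the
// current frame, fading linearly and clamping to fully transparent.
double lightTableOpacity(int offset) {
    const int distance = offset < 0 ? -offset : offset;  // |offset|
    return std::max(0.0, 1.0 - 0.4 * distance);
}
```

Rendering is then just compositing each nearby frame with its computed opacity before drawing the current one on top.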
Then, sometime in mid-July, I hit my second goal: hold onto a GitHub streak for one year straight.
The codebase for Blit was getting really big at this point. I was still able to manage it myself, but it was starting to become a bit of a chore. I also spent a lot more time refactoring and fixing existing code than working on new features. I feel like I lost a little of my drive then. With my two initial goals achieved, I could have stopped there. But for some reason, I didn’t want to. I kept on pushing.
My internship came to an end, I had a week at home, and then I was off to another internship. All of the previous places where I interned let me work on outside projects if I wanted to; as long as it wasn’t during work time, with work equipment, or a competing product, I was free to do what I wanted. This time around, my employer asked me to stop working on outside projects altogether.
While I felt that work on Blit was starting to go stale, I still didn’t feel too happy about having to quit development. I could have worked on it in secret, but that didn’t feel right to me. So, right before leaving for the first day of work, I made an early-morning final commit to the Blit repo. It was kind of poetic that my ending streak was exactly 400 days long.
In the month that followed, I was bummed that I had never added an interface for the reusable Cels/Frames, that the Brush and Resize tools were still unfinished, and that no work on multiple planes was ever done (Cel layering existed, though). Worst of all, I felt that it sucked for making sprites: the original goal of Blit. I still had ideas popping into my head, such as using FFmpeg to export animations as animated GIFs. All I could do was scribble them down on some note paper and file them away for when I was done with my current internship.
So, four months down the road, I was done with my final practicum. Did I start back on Blit? No. The previous month had been pretty turbulent for me, as were the next couple. It was my last semester of college and I was more focused on graduating. I still had ideas coming to me for Blit, but they went into the issue tracker instead of the code. I felt way too out of it to start work back up on Blit, and I had also realized how much of a behemoth the source had become. Thus I decided to put it on hiatus indefinitely.
Final Lookback and the Future
Almost everything I’ve done has been a learning project for me. Some of them I learned very little from, others a lot. Working on Blit taught me so much more about Qt than I ever wanted to know. Hell, in the process of developing Blit I spotted a minor bug in Qt and was able to submit an (accepted) patch to the project. That was one of the more rewarding moments, as I had never contributed to a major open source project before.
But the main thing I gained from Blit was learning how to manage and organize a larger project. I had never been so involved with issue tracking, documentation, and design before. As stupid an idea as it was to maintain a year-long GitHub streak, it somehow worked for me. It was fun to show off the streak to my friends, but it was really there to motivate myself.
While building Blit, one of the things I always wanted was to work on it with other people. Though I kept it in a private repo, I always intended to release the source code once I was done with some of the core features. While many of my friends thought it was interesting, I couldn’t find anyone else who wanted to work on it. I always made sure to keep good documentation of the design and source code for this reason. I really wish I’d had others to help me with this, not only so that Blit could be in a much further state, but also so I could learn to collaborate with others better.
It’s now been a year since I last touched Blit. At the beginning of this past summer there was a monkey on my back to figure out “the future of Blit.” I know I want to release the source for it, but I’m not sure where I want to go with it. In the past year, Dwango released OpenToonz and Krita has added some animation tools. Both have much better drawing capabilities. It’s hard to compete.
I have a small desire to restart work on Blit; for example, adding a camera connection to shoot paper-drawn animation, or working on some FFmpeg exporting. But I have other priorities right now. If I had to do it again, I would write Blit in C# instead of C++. I’ve grown to love C# a lot in the past year; development in it is much easier than in C++, and performance is still pretty good. I really hope QtSharp can get off the ground sometime soon.
If you want to check out the source for Blit, you can find it at gitlab.com/define-private-public/blit. If you want to see some of my fabulous source documentation, it’s at https://blit.gitlab.io/SourceDocs/. And if by the slightest chance you’re interested in working on Blit, please contact me.
To end with, here are some stats:
- 97 source code files
 - 8,175 lines of code (95% C++)
 - 400 days of contributions
 - 364 issues tracked
 - 3,151 commits
 - 91,528 additions, 65,617 deletions
 - An unknown amount of users
 - and 1 developer (me)
 
Back around mid-to-late July 2014, I set out to create Blit. One year on now (as of last Wednesday), I've made a lot of progress from practically nothing. Thinking back, I ask myself: “why did I want to make Blit?”
I've made many other projects before. Some of them were successes, whereas others were really failures (cough buzz cough). Those projects had something in common: they were short, small, and contained. When I look back on all of the stuff I've made, I notice there was nothing I could call a “grand” or “large” project. I wanted something I could call a major project that I built. More importantly, I wanted to learn how to manage a larger project; something I'd never done before.
So I set a goal for myself: “Make something large. Something that you can never really call 'complete,' but work on it for an entire year.” That's what I did. I had nothing more than a silly GitHub streak to motivate me; it now says “371 days.” There were a few days where all I did was update a TODO file or add some extra documentation, and ones where I didn't want to work on it at all (but I did anyway).
Blit still isn't really what I originally imagined it to be: some sort of 2D animation solution for pixel art and larger things (and hopefully pencil testing too). And I'm still not 100% sure what I want out of it. I consider what I've done so far to be nothing more than a prototype for a future vision.
I've met my goal of “work on something for a year,” but I plan to still chug along with it until I feel that I'm done. It's been fun so far.
Here are some stats:
- 371 days of continuous development (avg. one hour per day, more on the weekends)
 - 75,868 additions / 47,902 deletions
 - 7,846 lines of code (core application, mostly C++ w/ Qt)
 - 2,887 commits
 - 352 tickets
 - 247 closed
 - 105 open
 - 43 more issues until the next one
 - 12 (active) branches
 - 1 (and a 1/2) milestones completed
 - 1 contributor (me)
 
This is what my network looks like.
Cheers.
I was planning on doing another post after Imagine RIT, but I've been pretty busy. I was able to show off MEGA_MATRIX there, along with a small fork of Blit where you can create animations and then upload them to the device. It was pretty popular with the kids. I'll be posting some of the creations soon enough.
Speaking of Blit, I've still been working on it daily since the first release back in February. The big thing I had to do was refactor the underlying monolithic Animation module into a more flexible and reusable system.
Some small stats:
- 3.5 months
 - +1,100 lines of code (exactly)
 - 655 commits
 - 51 tickets (felt like 151)
 
I've also done a few other things like add a shape tool, line tool, fill tool (you know, the basics), and a few icons. There are many other features that I want to add too like exporting to GIF and video files. I think I'll be able to get them done for P-2.