Getting a life?

I've just been so very busy with everything.

So, first off: work. I've continued working my day job almost full-time, and that shows in the amount of free time I have. On weekdays I never have the time or the energy for anything productive, and weekends must be shared with everything else.

Despite all that, I got some great work done on Tracker v3 earlier this year. I should write a separate blog post about it, as it is a big topic. But in short, I ended up writing my own schema and code generation instead of using FlatBuffers. Initially the messages are serialized as JSON, but I have plans to move to a more efficient format once I get everything else done. I also spent time figuring out an actor system called Akka.NET, which I initially used for networking as well. Akka can be thought of as a message processing framework. Unfortunately the networked performance of Akka isn't very good, and the message-based communication seemed to get too complex when combined with (potentially branching) asynchronous communication. I had a good prototype, but moved on to a "simpler" custom service and networking implementation focusing on a request-response model. I'm basing the networking code on Tracker v2 and have made some good progress, but it is still a work in progress. My biggest challenge currently is that I have to decide on what kind of internal architecture I want to support, and I have A LOT of open questions and pending work (oh god, will it ever be done???):
  • Support service instance sharing?
  • Other ways of sharing data?
  • Service instance pooling, for example for database workers?
  • Task-based or not Task-based?
  • How to best support asynchrony, while keeping good performance?
  • Message queueing and buffering? Optimized data structures and piping code.
  • Back-pressure?
  • Am I really going to write my Master's Thesis on this??
And as if this isn't enough, I've FINALLY been trying to up my social life. And I can say without a doubt that this has taken a considerable amount of my energy, without giving too much back. Depressing. I've made some new friendish connections, but nothing more than that.

And - again - as if I didn't already have enough things to do, I also tried making some music, as I couldn't resist getting a Novation Launchpad Pro. Another reason was that I was going to make all kinds of non-music stuff with it, but so far that has been limited to a rather simple Guild Wars 2 key mapping for playing a music-themed minigame. I haven't done anything new with it lately.

Then there is also another time sink. I've wanted a good video camera for a long, long time for various reasons, so I finally acted on that. Got a GoPro Hero 5 and the Karma Grip. I was going to take them on a holiday trip, but the delivery dragged on and on. So no holiday video. I shot some test material with them, watched a lot more on YouTube, and in the end got bitten by the camera bug. This infestation caused me to upgrade from the GoPro to a Panasonic Lumix GH5 with a 20mm f1.7 lens (H-H020, the only lens that was available to ship immediately). I've had the camera for about a month now, and am in the process of getting a 12-35mm f2.8 lens (H-HSA12035), because the camera has amazing video quality and I'd like to use it for more than the 20mm allows. Its weakest point is the autofocus. But despite this I yet again have a new hobby: vlogging. I already shot and edited the pilot episode (cooked some food) on one Saturday (took almost the whole day!).

The episode turned out quite OK, but I'm still debating whether I dare to publish it, even after some tweaks. The thing is that I've continued the self-discovery catalyzed by Life is Strange, and that video shows more of my unfiltered, buried personality. I feel I'm not yet ready to expose that to the world. But even if I never publish that - or any other episode - shooting the video was a fun and therapeutic experience. The editing, not so much. :P

So, a lot has happened, and will continue to happen, but at a much slower pace. At least when it comes to programming projects. I feel great sadness about this, but I guess this is called getting a life?

The dawn of Tracker v2

Why: Gotta go fast! And besides, reinventing the wheel is the best way to build an understanding of it.

What: C# RPC / PubSub server. Clients in C# and Javascript. About 400k requests per second per core, with just a single client connection.

How: .NET Core. FlatBuffers. Persistent connections. Code generation.

Background

So… For the past month or so I’ve been working on an improved server for Tracker v1. Tracker v1 is a project I’ve worked on for almost two years now, though there have only been a few bigger sprints; the rest of the time it’s just been running nicely. Unfortunately I’ve been unable to find the time to properly write about it, so maybe this’ll have to do.

As the name might suggest, Tracker is a system for realtime tracking of connected devices. The server is contained in a single Python process (plus a MySQL database). There are some additional tools, and also the tracking client for modern Android phones. The trackable devices use short, encrypted UDP datagrams for sending position data to the server. This position data is then relayed to connected clients via WebSockets and overlaid on a map.

The data is also saved to a database, so that it can later be analyzed into trips (which are just updates clustered using time gaps) and viewed using the web interface. The web interface also includes user management. The Android client is configured by reading a QR code generated using the web interface. The code includes the server address, device identity and encryption key.

The system is designed so that the devices do not require any return channel from the server, so that they could, in theory, also be used over one-way radio links. The latest addition to the UDP protocol is an optional reply message, though; without it, verifying connectivity and the use of a recent enough encryption key and packet id would require extra effort. I’d like to avoid using TCP in the client, but there are some supporting functions for authenticating with the web interface and updating trip descriptions.

Pre-post-mortem

V1 works just fine, is stable, and is somewhat feature-complete. So why the need for a new version? Because I have BIG PLANS. While I haven’t bothered to benchmark the current version (I really should), I’m quite certain that it will not work with 100k+ or 1M+ users. V1 just doesn’t scale. The server is a single process, running on a single core. While there are some components that could be split out, the fact still remains: it just doesn’t scale.

So, learning from the past we can see that there are a lot of things we can do better. Also, this is a great opportunity to do some really exciting high-scalability stuff!

A new approach

Aaand I already forgot all of that, and initially planned and prototyped a new monolithic architecture. Unfortunately the monolith would have required all the business logic to be written in C/C++ (and preferably running on a bare-metal unikernel) to reach adequate performance, and even then there wouldn’t have been any guarantees on the level of performance. It would also be a single point of failure: when it failed, it would be messy.

So, let’s go in the other direction this time, for real. In v1 there already was some momentum in the direction of separate services. The geospatial queries were offloaded to a stand-alone microservice, and there were plans for moving the UDP handling and decryption to a separate process. So, I’m now proposing an all-new microservice-inspired architecture, where many of the tasks run with only minimal inter-service dependencies. This way the load can be spread across multiple machines, and maybe, just maybe, there’ll be a way to make the system more resilient to outages in individual services.

But what about the clients? They could, in theory, communicate directly with individual services, but user authentication, service discovery and security go so much smoother if there are only one or two endpoints the clients connect to. These connection points would then pass the messages on to the relevant internal services.

And this, dear reader, is what this post is all about.

One proxy to rule them all

This client communication endpoint should thereby be able to transmit - and possibly translate - messages from clients to the internal services, and vice versa. And because it would be extremely wasteful to open a new internal connection for each client, the communication should be handled using a message bus of some sort.

Most of the messages the endpoint can just proxy directly, but for others it needs to have some intelligence of its own. It should be able to enrich requests with the user identifier, and thereby also handle user authentication, at least on some level.

To keep the proxy simple, user authentication should be the only state it contains (and maybe some subscription state, so that the proxy can subscribe only once to each topic). This allows running multiple proxies at the same time, evenly handling the client requests. A separate load balancer is thereby not required.

(In an ideal case the proxy would also handle service discovery, failure detection and automatic failover. As part of this mechanism, the proxy could also - if it doesn’t make things too complicated - be the primary place to set feature flags. Feature flags are toggles that the system administrator can set to disable parts of the system even if no faults are present. The flags could, for example, set some internal service read-only, or disable it altogether. The rest of the features will then continue working, as long as they do not require access to that service. For example, the user service could be disabled for maintenance, but all existing authenticated connections would continue working. This, though, gets more complicated if there are dependencies between the services.)

And this proxy is what I’ve been working on now.

One serialization format to bind them all

I’m placing my bets on a strongly structured binary protocol that can be read without additional copy operations. (Just like I’m now switching from weakly typed Python to strongly typed C#. Strong typing is very useful in eliminating many accidental mistakes when typing identifier names etc.) One such strongly typed serialization format is FlatBuffers (made by Google). It is much like Protocol Buffers (also made by Google): the message format is defined using a schema file, and then a code generator is run, producing the strongly typed bindings for manipulating the messages. The format supports protocol evolution, meaning that new fields can be added and messages still stay readable for both older and newer clients. Not a very important aspect in a system of this size, especially when all the parts are controlled by a single party, but it’s kinda nice to have.
table RequestSum { a:int; b:int; }
table ReplySum { sum:int; }
Listing 1. Example message definitions.

As mentioned, the cool thing with FlatBuffers is that they are extremely fast to read, only a few times slower than accessing raw structs (due to the vtable-based offset system required to support the compatibility). No additional processing is required to access the data, and building the messages is equally straightforward, requiring no allocations beyond the single (poolable) builder allocation.
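
Tracker v2 itself is C#, but to give an idea of the shape of the API, here is roughly how building and reading the RequestSum message from Listing 1 looks with the Python bindings (the generated names are an assumption - they depend on what flatc produces from the schema):

import flatbuffers
import RequestSum  # hypothetical module generated by flatc from Listing 1

# Building: everything goes into the single builder buffer, no per-field allocations
builder = flatbuffers.Builder(64)
RequestSum.RequestSumStart(builder)
RequestSum.RequestSumAddA(builder, 1)
RequestSum.RequestSumAddB(builder, 2)
builder.Finish(RequestSum.RequestSumEnd(builder))
buf = builder.Output()  # bytes, ready to frame and send

# Reading: no parse step, fields are read in place through the vtable offsets
msg = RequestSum.RequestSum.GetRootAsRequestSum(buf, 0)
print(msg.A(), msg.B())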

And one extra complexity layer to unite them all

Like many other serialization formats, FlatBuffers doesn’t know anything about the concept of RPC. Because of this, I made my own layer on top. A service file defines named remote methods that take a specific message type as an argument and return another type. The message types themselves are defined in the FlatBuffers schema file.

service SomeApi {
  sum: RequestSum -> ReplySum;
  sub: RequestSub -> void;
  pub: RequestPub -> void;
}
Listing 2. Example service.

After a service has been defined, a few code generation tools are run to generate the template code for the server and to define an interface the clients can use to make calls to the service. This code generation step also creates the identifiers that tie message names to actual protocol-level identifiers. These identifiers are generated for events and errors too, not just the requests and responses mentioned in the example. Those two do not require extra definitions in the service file (at least not for the moment; it might be nice to explicitly define them, too).

And it works

It was quite an effort, but preliminary testing gives very nice performance numbers. A C# server, running on .NET Core RC2. TCP, with a minimal framing protocol for messages. Pipelining. A single Intel Core i7-4790k server core can handle about 400 000 request/response pairs per second. I’ve yet to test this using multiple clients, but I have high hopes. Those hopes might get shattered, though, as only one request is executed at a time (per connection). This is of course not a problem if all the operations happen in-memory, but throw even 1 ms of IO latency in there, and the request rate drops to 1000/s…

The plan for the future is to - obviously - solve that little problem, then clean up the code generation, tidy up the rest of the code, improve the tester, and then maybe finally get to writing some business logic.

Configuring HAProxy 1.4 to do host-based reverse proxying

Foreword: the new(er) HAProxy 1.5 supports map-based host matching, which is the recommended way. But here’s a guide for those of us who are stuck with the older version.

So, you’ve got several application servers, with ports all over the place. How do you organize this mess to be more easily accessible? With DNS and a reverse proxy that is aware of the Host header. HAProxy is perfect for this: high performance and low footprint. As a bonus, HAProxy can also be configured to terminate HTTPS requests so that even your dumbest services can benefit from encryption!

For setting up the proxying, here’s a handy little list:

  1. Install; on Debian, run apt-get install haproxy
  2. Configure; take a look at this paste and copy the contents to /etc/haproxy/haproxy.cfg
  3. Enable; to mark that you’ve actually edited the configuration, go to /etc/default/haproxy and set ENABLED=1.
  4. Run; service haproxy restart.

And that is it! Now you have a basic HAProxy installation that reverse proxies requests to two different hosts/ports based on the Host header. Simply add more backends and acl/use_backend combos to introduce new services.
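
For reference, the host-matching part of such a configuration looks roughly like this (a sketch only - the hostnames, backend names and ports are made up; the paste in step 2 has a complete config):

frontend http-in
    bind *:80
    mode http
    option forwardfor
    acl host_app1 hdr(host) -i app1.example.com
    acl host_app2 hdr(host) -i app2.example.com
    use_backend app1_backend if host_app1
    use_backend app2_backend if host_app2

backend app1_backend
    mode http
    server app1 127.0.0.1:8080

backend app2_backend
    mode http
    server app2 127.0.0.1:8081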

But the fun doesn’t end here! Now you have a bunch of backend servers whose requests all originate from a single host, and that breaks ACLs and logging and everything! To fix this you’ll have to go through each and every service manually and make the necessary configuration changes.

For example, to make HFS trust the X-Forwarded-For header set by HAProxy, you’ll have to edit its configuration file manually, as per this guide.

For Apache there exists a whole module for this: mod_remoteip. Simply include the module and set RemoteIPHeader X-Forwarded-For and RemoteIPInternalProxy proxy_ip_here. You may also need to change %h to %a in LogFormat to get the logging to work correctly.
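
In configuration terms that is roughly the following (the module path and the proxy address are placeholders):

LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 192.0.2.10
# and in the log configuration, change %h to %a:
LogFormat "%a %l %u %t \"%r\" %>s %b" common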

No matter what you are using, the common thing is to mark your proxy machine as trusted, so that the real remote IP can be read from the header. Be aware that the header contains a comma-separated list of addresses (or just multiple consecutive headers of the same name), and only the last one - the entry appended by your own proxy - can be trusted. The rest can be freely set by the client.
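
On the application side that boils down to something like this (just a sketch, not from any particular service):

def real_client_ip(forwarded_for):
    # Only the right-most entry was appended by our own (trusted) proxy;
    # everything before it may have been supplied by the client.
    return forwarded_for.split(',')[-1].strip()

real_client_ip('203.0.113.7, 198.51.100.23')  # -> '198.51.100.23'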

On 'Life is Strange' – part II

Wowzers. What a journey.

This post ended up taking a lot longer to write, and in the process turned out a lot longer, than I first anticipated. And I still feel this isn't everything that could be said (and nor was the previous part).

Most of this was written in the days immediately after the release of the final episode, and most of the editing was done by the end of that week. Afterwards it still took a few extra weeks to recover enough to even be able to pull everything together and add some missing observations.

But to get on with this: first some initial thoughts about the final episode. I’ll try to be vague; there shouldn’t be any spoilers. These thoughts are complemented with some self-reflection.

Episode 5

While the final episode was enjoyable, it lacked some of the magic all the previous episodes had. I don't know if it was because I still wasn't done processing the previous episodes, or because I had so much other stuff distracting me in real life. In any case, I felt slightly disconnected.

Or maybe it was the fact that the episode was more action-oriented where the previous episodes were more dialog-oriented. Also, with so many different locations and quick transitions between them, the episode felt a bit rushed. But then again, it's also about how you view the thing. Squeezing together tons of different fan theories (intentional or not) and sensibly finishing a time travel story is definitely not easy.

There is a variety of arguments to be made for and against the final episode, and especially the endings. The disparity of polish between the endings was quite disappointing and left a lot to be desired. And the overall feeling of sadness about the end of it all is another matter completely...

After watching the credits I still had to spend maybe five minutes just staring at the main menu listening to the music, not really comprehending what had happened nor that the game really was over.

Craving for closure

As above - and like I so subtly hinted in the previous part - Life is Strange touched me with an unexpected intensity. Partly because of the game itself (the story, the characters and the atmosphere) and partly because of how it led to some pretty major self-reflection. First, about the game itself.

There is so much I want to say.
So much emotion.
So many thoughts.
So much everything.
And while that everything was.. everything, it also was almost too much.

And now the end is here and I feel empty. The closure wasn't what I was expecting, nor was it what I was hoping for. Instead, it was what I needed.

While the endings left A LOT open, they also had an adequate amount of closure to keep me from totally collapsing. This allowed me to limp to the game's reddit community, where the feelings could be shared. Thank you all. In addition, after I had played episodes 3 & 4, which I discussed in the previous post, I listened to the PSNStores podcasts about those episodes. This helped me a great deal in processing what happened in them.

The more time I spent reading reddit and watching interviews, the better I finally felt. Now that I'm writing this particular paragraph weeks later, I'm almost completely at ease with everything. I've had time to research how the final episodes, and in particular the ending, are supposed to be understood.

* * *

As the game's developers have said on many occasions, the game was about the personal growth of Max. A nostalgic coming-of-age story. This is a crucial cornerstone for understanding it. The relationships between the characters were crafted to be so perfect and special. For example, Chloe was crafted to be THE perfect friend, with a deep emotional connection to Max / the player, evoking a longing for such a person in real life. But real life does not work this way. It just doesn't.

* * *

To adequately process coming-of-age stories, there needs to be some reflection on one's own life. This is the part that most definitely changed me. Some details below, but the gist of it is that while this experience affected (at least in the short term) how I see the world, it also made me realize certain rather grander / more fundamental things about life. Life is so weird.

The most immediate realization from this whole experience: everything will come to an end and there is nothing you can do. Ends have to be endured. It’s hard to endure everything alone, and for that you need someone, or someones. (In this case primarily /r/LifeIsStrange and the podcasts. I also had some friends, but their role was just to be an audience while I announced how this game had had such an impact on me. But that helped, too.)

In the end you’ll feel weird and dull, but also oddly at ease: there is nothing you can do now. I’ll never forget the journey, or how it helped me grow.

Transforming life

As the game and the setting were so well crafted, it was really easy to actually become Max, not just be someone who controls her avatar. Not many games can accomplish this. Almost without noticing it, I had slipped into the wonderful nostalgia-colored teenage life of Maxine Caulfield.

This glimpse into another life. The life of an adorable, slightly geeky girl who likes photography and innocently loves to observe the world. But you can't change your life just like that; I am not Max, nor is her life mine. No matter how much I hoped to be Max, it was not going to happen. But you can try to slowly change yourself.

My immediate reaction to this was of course to try and be more like Max, to try to observe the world in that same kind of non-judging, all-seeing way. But it's not that easy. While it's a bit of a stretch, I do have moments when I feel emotions somewhat comparable to hers. Not everything is perfect, but I'm pretty good at what I do and at how I have my future planned. I do have the occasional moments of feeling great about life. And while it doesn't happen too often, I've also had some good moments with friends. I should just embrace who I am, no matter what.

And you don’t actually need to have an opinion on everything, just keep an open view on the world. Don’t just plod through everything without taking a moment to appreciate what you are doing.

Do this and maybe you’ll end up more like Max. More like a better person. And don’t try to necessarily change the world, change your view on it.

Emotional layers

Having continued on this path of self-reflection, we are now arriving at the very core.

This experience has finally made me realize that there are multiple layers (or segments, or whatever to call them) of me. Sure, layering is a known psychological theory, but I didn't realize just how accurate it was, or that I too implemented it. There are the layers I show at work or when studying. I know it's necessary to have some emotional separation, but that also makes me feel incomplete. And then there is that one layer at the core that was affected by all this. Maybe that is the real me?

I’ve been under a lot of stress this semester, and as a coping mechanism I’ve had to segment myself to multiple distinct though-spheres(wtf is that word). Sure there is some crosstalk, but it has stayed low. While this has helped me to focus on the task at hand, I’ve began to feel the wearing effects of maintaining that emotional isolation.

There’s always been those segments, but lately they have been even more isolated. The pressure building up.

The outer layer is divided into two distinct things. There is one me for studying and another for work. Protected by those, there is the normal me for friends, gaming and living in general. But that is not everything. There have been occasional hints of an isolated layer below, but nothing really concrete.

But now this game pierced through all those layers and exposed that very core underneath, the 'real me' - or at least as real as it gets. There was a reason that core was isolated. It's sensitive. And this game cracked that isolation open. I'm in ruins. I've tried to keep everything from imploding, but it has not been easy.

Maybe the game was an escape?

Total(ish) immersion, or whatever?..

* * *

I’m actually having difficulty finalizing this section, as that would mean I accept all this.

Where has the time gone?

I don’t know how I would have fared had I not had an almost perfectly timed semester break this week. I still went to work, but didn’t have to worry about exercises and lectures. Instead I had time to focus on all of this: process everything(or as much as I could/can) and stumble for closure.

Like a comment on reddit said, it makes no logical sense for a video game or fictional characters to evoke this much emotion. But this is art, and art is supposed to have some kind of effect.

I wish I could stay in this moment forever. But then, I guess it wouldn't be a moment.

On 'Life is Strange'

A story first, skip ahead for the actual review.

Not long after the game's initial release I picked up the first episode and was immediately hooked. Played through it in a weekend, and then for a second time with different choices.

I immersed myself completely into the world and story, and it was intense. Couldn't even think about playing another episode for a whole week. It actually took well over a month before I could play the second episode.

And the second episode was even better. Now I had to take an even longer break.

When I finally resumed playing, both ep3 and ep4 were out, and ep5 just two weeks away. I tried to pace myself by just playing ep3 during one weekend, and then ep4 the next.

I failed. Ended up playing ep3 in a single day. And because it ended with such a cliffhanger I just had to play ep4 the next day.

That was a dire mistake. Now I'm broken and feel empty and hollow. I couldn't even function properly for the rest of the day (or the next).

* * *

This game is larger than life. The game's protagonist is a photographer, and through her eyes you see how vibrant and colorful the world actually is. The atmosphere is truly captivating and full of wonder, and the plot is something unexpected.

It took a long time to quantify, but I finally figured out why the game resonated so strongly with me. My life is quite dull and boring, and immersing myself completely in the game world and its characters allowed me to break free from that grayness and experience the full spectrum of shades that is life.

There is also the role-playing aspect. I'm normally not that social. I stick to my routine and am quite cautious about trying out new things. But the game's protagonist is social. Her routine is broken by the unfolding events, and that leads to trying out new things. Even the character interactions allow for experimentation, thanks to the rewind ability.

This all is so much more than the gray ordinariness of (my) real life. The withdrawal from stopping playing is real, and it hit me hard. Combine this with an awesome plot that you can influence in a real way. Add a setting that lets me partly (re?)live what I kinda missed growing up. And finally add the very likeable protagonist, a great selection of songs and the very fitting and beautiful graphics (and not a single problem with performance).

6/5. Will play again - when I recover.

(Also, there was a great opinion piece on PCGamer by Jody Macgregor, I highly recommend reading it.)

Plotting GPS data

Sometimes going out for just a walk isn’t that easy and some extra motivation is needed. Luckily I had just that: going for a walk allowed me to get some rather important real-world data for the GPS tracking service I have been working on for quite some time.

During those walks I had the idea of further using the recorded data. The forest was filled with paths, and I thought it’d be great to map them. And maybe even have some kind of heatmap of the most traveled routes!

Work, studies, gaming and general procrastination kept me busy, but here it finally is:

Investigating TCP timeouts

As hinted by an earlier post, one of my latest work projects was building a WebRTC-based video streaming system. The system features a Websocket backend server, written in Python with gevent, that handles the client sessions and signaling. My co-worker was performing some testing and everything seemed to be working just fine. Except at one point he encountered a situation where the server insisted another client was already connected, when it clearly wasn’t. Even netstat said that the TCP connection was ‘established’.

Some Websocket client connections were arbitrarily staying open even after the browser was closed! I had just added some hooks to the server to better detect disconnected / timed-out clients, and a good guess was that I had messed something up. Nope. I debugged the thing for hours but couldn’t find a single bug.

That is, until I tried toggling network connections. Being a system targeted at mobile devices, one facet of testing is to check how well it works with different network connections. If the network connection was turned off while a client was connected, the server couldn’t detect that until after about ten to fifteen minutes, even though it was sending keep-alive packets every 20 seconds. Strange indeed.

But maybe it wasn’t: maybe the write call was blocking the specific greenlet? That is an easy thing to test, just dump some stack traces. But nope again. How about if I run the server via strace and try to spot the writes there? It took a bit of an effort, but the strace output revealed that the write calls were being performed just fine! This was starting to be most troubling…

But then, a revelation: write returned EPIPE. After quite a bit of research I had finally found the reason for this behavior: TCP timeouts. Turning off the network connection really did just that - it turned off the connection without giving the protocol stack time to say its goodbyes. The server then thought the client just had a really bad connection and kept resending the queued data with exponential backoff, per the TCP spec. My math didn’t quite match, but in an old thread the total timeout was calculated to be 924.6 seconds with the default TCP parameters. This was quite close to the actual timeout observed with the server.

* * *

I sighed and changed the keep-alive protocol so that timely replies were required instead of just relying on failed writes. Now it works beautifully.
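
Roughly, the reworked keep-alive looks like the sketch below; the Session wrapper, the message format and the timeout value are made up for illustration, while the 20 second interval comes from the original setup:

import gevent
from gevent.event import Event

class Session:
    def __init__(self, ws):
        self.ws = ws
        self.pong = Event()

    def on_message(self, msg):
        if msg == 'pong':
            self.pong.set()

    def keepalive_loop(self, interval=20, timeout=10):
        while True:
            gevent.sleep(interval)
            self.pong.clear()
            # A write can "succeed" on a dead connection - the data just sits
            # in the TCP send queue - so demand an actual reply instead.
            self.ws.send('ping')
            if not self.pong.wait(timeout):
                self.ws.close()  # no reply in time -> drop the session
                return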

Tl;dr: TCP was just trying really hard to resend data after it detected packet loss, only giving up when about fifteen whole minutes had passed.

Investigating slow startup of a gevent-based server application

Fast iteration time is critical when developing new things, and everything is fine when the server takes half a second to start. But when that server takes ten seconds to start, things get annoying. So annoying that I had no choice but to spend several hours digging around for the reason for that slowdown.

The server in question was a Python/gevent/pywsgi server of a kind I have been using for quite some time now. And this was a new problem, one I had not encountered before: of course I wanted to get to the bottom of it.

First I tried to place some strategic print statements here and there, but those didn't help. Next I fired up the debugger and suspended the process during startup. Nothing low-level was blocking socket creation, and gevent was happily running its event loop; couldn't be its fault. Gevent has always done a great job of not blocking anything, so that couldn't be it. (This is where I made a 'mistake', see the last paragraph.)

I was convinced it was something low-level, so it was time to bring out the big guns: API Monitor. It could log every low-level API call, and then it would just be a matter of digging through them all. And there indeed was some digging to be done. Then I finally found what I was looking for. The gevent event loop really did spin as it should, but the socket began processing data only after a gethostbyaddr call returned.

(The strange characters shown as the call's argument in the API Monitor trace are actually FE 80 00 …, the rest of the address not rendering as a string.)

This low-level function was taking a long time to execute, and it was actually executed in another thread, communicating its results back to the main thread via a socket. As its argument it was given the link-local IPv6 address of the Hyper-V bridge adapter (the first adapter listed by ipconfig). Maybe due to some kind of misconfiguration or whatever, that call took an excessive amount of time.

Now that I knew what was happening, I wanted to know why. Intuition brought me to gevent's socket.py wrapper, where I inserted some tracking code into its implementation of gethostbyaddr. That in turn told me that, as part of creating the socket, the server's environment variables were initialized. One of these was SERVER_NAME. If it was not already set, it was resolved via getfqdn - which called gethostname and that devilish gethostbyaddr. Only after the server name was resolved could the socket begin accepting connections.

Now that I had the general reason, I didn't want to bother myself more than I had to. As the mechanism was already there, all it took was to pass environ={'SERVER_NAME': 'whatever'} as a kwarg to WSGIServer.
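
In code the whole workaround is just this (a minimal sketch; the application and port are placeholders):

from gevent.pywsgi import WSGIServer

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

# Providing SERVER_NAME up front means the default environ never calls
# getfqdn(), so the slow gethostbyaddr() lookup is skipped entirely.
server = WSGIServer(('0.0.0.0', 8000), app, environ={'SERVER_NAME': 'whatever'})
server.serve_forever()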

* * *

Had I thought a bit more before letting go of the debugger and starting API Monitor, I would probably have thought about looking at the individual greenlet stack traces. Those would have clearly shown that the getfqdn call was blocking the main server greenlet.
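
For future reference, dumping those stack traces takes only a few lines - a sketch that walks all live greenlets via the garbage collector:

import gc
import traceback
import greenlet

def dump_greenlet_stacks():
    # Find every live greenlet that currently has a frame and print its stack.
    for obj in gc.get_objects():
        if isinstance(obj, greenlet.greenlet) and obj.gr_frame is not None:
            print(obj)
            traceback.print_stack(obj.gr_frame)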

WebRTC primer

As a relatively new technology, WebRTC is still quite unheard of, even though it will likely be the next Big Thing. It offers a whole new way to create interactive peer-to-peer multimedia applications within a browser - without requiring any additional plugins. Support is already built into the newest versions of Chrome on both desktop and Android. Firefox and Opera also have WebRTC capabilities, but still have some features missing.

It's a bit of a chicken-and-egg problem, really. There is no widespread adoption yet, so development doesn't have the highest priority, and development is not the highest priority because there is no widespread adoption. While I can't really do much about the APIs, I can still try and present my take on the basic WebRTC connection flow. Hopefully this helps someone create a cool WebRTC application, thereby indirectly contributing to development priorities.

But please note that this text is written as part of a project I've been working on and is not meant to be the singular introduction to WebRTC, nor is it primarily meant to be a tutorial. If you are looking for a more thorough introduction, see the great tutorial on HTML5 Rocks. If after reading that tutorial you still feel disoriented, I hope you come back here and read what I've written. Hopefully at least my diagram will clarify something.

WebRTC connection flow

In short, to establish a connection between two peers the following needs to be done:
  • Create a signaling channel between the peers
  • Get local media, and negotiate codecs
  • Perform interactive connectivity establishment (ICE) assisted by the signaling channel
  • And finally start streaming data
This is my take on the issue. It is not the one and only way to do things, especially with the signaling channel. But then again, signaling is not covered in the WebRTC specification, even though it's a very important piece of the puzzle. The easiest way is to roll your own asynchronous server using something like python-gevent or node.js, but you could just as well adapt something like SIP or XMPP for the purpose.
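
To make the signaling part concrete, below is a bare-bones relay-style server on top of gevent and gevent-websocket. It is only a sketch with no error handling or authentication, and using the URL path as the session id is just an assumption for this example:

from gevent.pywsgi import WSGIServer
from geventwebsocket.handler import WebSocketHandler

sessions = {}  # session id -> websockets of the peers in that session

def app(environ, start_response):
    ws = environ['wsgi.websocket']
    peers = sessions.setdefault(environ['PATH_INFO'].strip('/'), [])
    peers.append(ws)
    try:
        while True:
            msg = ws.receive()  # offers, answers and ICE candidates as opaque text
            if msg is None:     # client went away
                break
            for peer in peers:
                if peer is not ws:
                    peer.send(msg)  # relay to the other side of the session
    finally:
        peers.remove(ws)

WSGIServer(('0.0.0.0', 8001), app, handler_class=WebSocketHandler).serve_forever()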

At the very beginning, both of the clients connect to this signaling / management server and mutually agree on a session. They then use this session to exchange the messages necessary to build their own direct connection, following the steps illustrated in diagram 1 below (use a state machine, you'll thank yourself later). Most of the functions in the diagram refer to the WebRTC stack, but some are just there to illustrate a point. Also note that some functions might fail due to user actions, and some due to timeouts. Application-defined timeouts can also occur while waiting for state transitions. Finally, the traffic between the peers in the diagram goes via the signaling server.

Diagram 1. Basic connection flow

Reverse engineering a binary code modification of an Unity3D game

This happened a while ago, but I found the time to write about it only now (no surprises there).

* * *

So, during a casual conversation it came up that a single-player FPS game I used to play on Android also has an official PC version. This made me quite happy, though my joy was crushed soon after: a 60-degree field of view is a recipe for instant headaches, disorientation and general discomfort.

But I won’t let that stop me! No way. Although I’m quite a noob, I’ve still had some success reverse engineering various games. The game uses the Unity3D web player, so I’ll just unpack that, find some strings referencing fov, then take a look at the code and modify that pesky 60 to a more manageable number. Not that easy. And yeah, it’s not C, but C#. That’s a whole different kind of beast, with its own execution model.

But hey, there actually seems to be a CheatEngine table on Google with a fov mod! Piecing it together, I was finally able to modify the fov with it. Great, I just downloaded some stuff and pressed a button. l33t hax.

* * *

So, how does that thing work? Well, I also found out that the game’s code was not obfuscated. Running the game dll through a decompiler produced lovely human-readable code. And look! A camera class! And it has that pesky 60 fov hardcoded into several places! Maybe the CE script searches for it and replaces the entries? Converting 60.0f to hex and doing some manual comparisons verified that to be the likely scenario.
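
That conversion is a one-liner in Python:

import struct
struct.pack('<f', 60.0).hex()  # '00007042', i.e. the bytes 00 00 70 42 to search for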

Then some more reading about C# IL code etc., and comparing the bytes in the script with the IL code of the relevant camera methods. A perfect match! Indeed, the CE script was searching for the camera code and replacing it with a version where the hardcoded fov value was different. This also explains why the script didn’t work if a level was already loaded: the IL code had already been JIT-compiled into another form.

* * *

This discovery means that I can look for arbitrary code in the game, parse the IL dump with my Python script and make the relevant changes, and then use that same CE script to replace those parts of the game’s code.

Using this new ability, I also found the pesky code responsible for aim assist. A few NOPs, and welcome, full manual aiming! Though some time later the game received an update, breaking the aim assist disabling. I’d like to revisit that section of the code some day and see what needs to be done to fix it.