About Tracker, and being perfect

How hard can it be to just make a simple service to track a phone? Not that hard. Unless you want to be a perfectionist and overengineer the whole thing, several times over. And that is why I never seem to get anywhere close to finishing Tracker.

In the last five or so years I've learned a lot about web services and would of course like to apply ALL that knowledge somewhere. A hobby project seems like the perfect place!

  • Code generation and custom tooling: avoiding repetitive tasks by writing ten times as much code to maintain. And then there is this one thing that was not in the original specification and you'd basically have to rewrite the whole thing.
  • High availability and fault tolerance: considerable additional design and implementation complexity. Good luck getting anything ready when you work on the project only on occasional weekends.
  • Metrics: a whole universe in itself to set up, integrate and maintain.
  • Devops, service orchestration and distributed configuration management: yet another whole universe, with yet more design and implementation complexity. As if you even get to this point. Sure you can devops your stub services, but that, if anything, is wasted time.
  • Distributed tracing: better reinvent the wheel again! That way it works just the way I want.
  • Extremely high performance: each small part is tuned for the best achievable performance before integrating it into the larger system, and never finishing anything.
And of course everything needs to be P E R F E C T!

What good is a component if it doesn't function as well as theoretically possible? This intra-process message queue can only pass 300 000 messages per second?? It should be able to do at least 500 000!! Better spend a week trying to optimize it before moving on to the next thing! A decryption routine that can only decrypt 20 000 packets per second? Needs to be able to do at least twenty times as much! Better to just give up or spend ages looking for a faster implementation.

How about this config file? Why is it on the file system? How do I make it seamlessly work with multiple environments with different configs? What if those configs change dynamically? Better get to work already, there is much to do!

Hey, see that single point of failure? You'd better engineer it away! What failure point? I need to first map the whole system and get realtime metrics from every single thing it does, or doesn't.

And hey, see that middleware solution for solving that one problem you have? Better not use it and write my own. That way I learn so much more about the problem domain and can leave out unnecessary parts for blazing fast performance.

I'd like to have a public test server for this. I wonder how I can set that up automatically. And better make sure that it does database migrations and all that automatically.

And did you know that you could run your code inside a custom operating system to save on context switching time with networking?

Oh, and there is this web application, too. What if I used modern JavaScript this time? And then the mobile app needs to be rewritten, too.

And what about...!


A new phone :O

Times really are changing. After about two years of actively looking for a replacement for my trusty - yet slowly rotting - Nexus 5, I finally think I found a suitable model! The Samsung Galaxy S9. 4k60 OIS video, USB-PD charging with USB 3.0 and dual SIM support. And a notification LED. Eat that, Nokia! A whole lineup of attractive phones, yet no LED. Why do you do this?

Unfortunately the Galaxy S9 isn't everything I had hoped for. I found a large number of things I dislike in just a few hours, sorted roughly by category. I sincerely hope that I can cross out these pain-points as time goes on and I adjust to them, or find good workarounds. If not: fuck.

  • It is a lot harder to pick the phone up from a flat surface.
  • The surface of the phone is extremely slippery. I wouldn't dare to try to hold it with woolen gloves, for example.
  • Extra care and thought must be put into holding the phone; the edges of the screen pick up unintended touches rather easily.
  • The phone has a very small chin, and this makes it hard to press the back-button when using it with just one hand.
  • On an otherwise smooth phone there are disturbingly rough edges on the back: the edge by the charging port, and the one opposite it.

  • The camera needs a really bright environment to work well at 60 fps. Normally lit indoor spaces are just not enough.
  • Charging is limited to 10 W even though the phone supports USB-PD, which allows for 100 W.

  • Bixby can't be disabled without a Samsung account.
  • No "Development shortcut" option in developer options for easily killing and uninstalling apps. No any shortcut to kill apps!
  • App info doesn't show the package name.
  • Screen turns on when charging with no way to disable this.
    • A workaround is to install SnooZy Charger. The screen still turns on, but then immediately off. But it will also turn the screen off even when it was already on prior to starting or ending charging :(
  • Bloaty software that can't be uninstalled, some not even disabled.
  • No option to disable charging LED notification.
  • Annoying popup to buy Samsung Secure Wi-fi VPN service when connecting to new Wi-Fi networks.
  • I haven't (yet?) rooted the phone, so it is impossible to copy the settings and other saved data from some applications to the new phone. But this point applies to all other phones, too.
  • Rooting will disable Knox, a type of TPM. I see no reason for this. With rooting the TPM would be an even more important component, allowing cryptographic secrets to be stored while preventing them from being extracted from the device.
  • With fingerprint unlocking enabled, the phone will randomly ask for the password. No setting to make fingerprint work 100% of the time.

I guess that is everything for now. Why did I even get the new phone if I hate it so much? That is a valid question. The biggest reason is of course the fact that the older phone was becoming unusable. The same thing that happened when I had to upgrade my otherwise perfectly fine 4790k-based desktop PC to an 8700k. That was actually even worse, as the performance gains were only marginal.

With this new phone I have high hopes that the greatly increased computing power shortens iteration times with mobile application development when I eventually get back to doing it. And now I can once again use the phone for fun things, instead of just swearing at it for being so slow. But I wonder what those fun things are. Are there other things to do with a phone than just browsing reddit? The greatly upgraded camera will hopefully also enable a whole new class of things to do.

And USB-PD support will make it easier to charge the phone when traveling light - without needing other extra investments - as it allows charging the phone with the same charger I use for my laptop (Dell XPS 13). I could charge the phone via the laptop, but the USB outputs of the laptop work unreliably when it is turned off; can't risk it. But a travel battery charging setup would be a whole topic on its own, so let's not get into it now.

In conclusion: I loved the Nexus; can the S9 get anywhere near it? That is what I'm hoping for; time will tell.

Mission accomplished?

Oh god. I’ve been more busy than ever. I guess I really got a life like I was wondering the last time. Or I just suck at time management; have I really done anything? So… Let’s review the year despite there still being plenty of it left and see if we can answer the question in the title.

First of all, as the previous post might have made you guess, I kinda lost the motivation to develop the Tracker project further, at least for now. Again. There is just so much work to be done, and I’m just one person. And with all the things I’m going to list below I just didn’t have the time, either. I really hope that I can get back to it at a later time with better focus. This was also a conscious decision in part. The project did, and would have, taken so much time, and I just had to prioritize. I also mentioned how it is a potential topic for the Master’s thesis. But before I can dive into that, I must finish my Bachelor’s. And that means passing a course in that disgusting language that is not to be named. But I’ve really tried to learn it! Been doing Duolingo almost every day since April. In addition I’ve almost completed one other online course on it as suggested by a teacher. I really really really hope that I can pass the course by the end of this year. I should have all the motivation I need, but somehow that still isn’t quite enough :(

I’ve also tried to be a bit more social, the camera I bought serving as a catalyst. Let’s just say that it is a work in progress :S A work in progress with results! I have started a vlog, and so far I’ve shot, edited and uploaded eight (8) full episodes! Although none are public, at least at the moment. And the latest three aren’t even on Youtube because I haven’t yet dared to purchase a music licensing subscription. I’ve also been getting back to photography, started an Instagram account and been trying to post my “best” pictures there. It is a rather random collection of pictures without any great theme, so getting any real followers will probably be rather impossible. Not that they appear without “marketing”... But that is not important. Right? Right??

Oh, and I also bought a house. Wow. So much space. “Oispa tilaa” (“[I wish I had] more space”) was already becoming quite a catchphrase for me. So much space!! This, of course, wasn’t a decision done lightly and took considerable time investment, too. And furnishing the whole house is an ongoing process. More time sinks! I’m unable to describe this in more detail currently. But I’ve talked about the house in the vlog a bit, and will talk more later. Maybe even a blog post, but don’t try to push it.

And last but certainly not the least: my very hard work really paid off, and I finally found a relationship in January! This was and is a huge time sink, but I couldn’t be happier! But this is all you need to know about that.

For the next year I hope to get more free time which I can spend on gaming, code, tv / movies and some other ambitions, like more camera and social stuff. Maybe music. And other™ things! Remains to be seen how it will turn out.

And the answer for the topic? You decide.

PS. I’m currently dying trying to figure out a replacement for my five year old Nexus 5 which has been slowly dying over the past few years. The time for replacing it is really close, but I just can’t find a single satisfactory device on the market!

Getting a life?

I've just been so very busy with everything.

So, first off: work. I've continued working my day-job almost full-time, and that has reflected on the amount of free time I have. On the weekdays I never have the time nor the energy for anything productive, and weekends must be shared with everything else.

Despite all that I got some great work done on Tracker v3 earlier this year. I should write a separate blog post about it, as it is a big topic. But in short I ended up writing my own schema and code generation instead of FlatBuffers. Initially the messages are serialized as JSON, but I have plans to move to a more efficient format when I get everything else done. I also spent time figuring out an actor system called Akka.Net, which I also used for networking initially. Akka can be thought of as a message processing framework. Unfortunately the networked performance of Akka isn't very good, and the message-based communication seemed to get too complex when combined with (potentially branching) asynchronous communication. I had a good prototype, but moved on to a "simpler" custom service and networking implementation focusing on a request-response model. I'm basing the networking code on Tracker v2 and have made some good progress, but it is still a work in progress. My biggest challenge currently is that I would have to decide on what kind of internal architecture I want to support, and I have A LOT of open questions and pending work (oh god, will it ever be done???):
  • Support service instance sharing?
  • Other ways of sharing data?
  • Service instance pooling, for example for database workers?
  • Task-based or not Task-based?
  • How to best support asynchrony, while keeping good performance?
  • Message queueing and buffering? Optimized data structures and piping code.
  • Back-pressure?
  • Am I really going to write my Master's Thesis on this??
And as if this isn't enough, I've been FINALLY trying to up my social life. And I can say without a doubt that this has taken a considerable amount of my energy. Without giving too much back. Depressing. I've made some new friendish connections, but nothing more than that.

And - again - as if I already didn't have enough things to do, I also tried making some music, as I couldn't resist getting a Novation Launchpad Pro. Another reason was that I was going to make all kinds of non-music stuff with it, but so far that has been limited to making a rather simple Guild Wars 2 key mapping for it, for playing a music-themed minigame. Haven't done anything new with it lately.

Then there is also another time sink. I've wanted a good video camera for a long long time for various reasons, so I finally acted on that. Got a GoPro Hero 5 and the Karma Grip. Was going to take them on a holiday trip, but the delivery dragged on and on. So no holiday video. I shot some test material with it, watched a lot more on Youtube, and in the end got bitten by the camera bug. This infestation caused me to upgrade the GoPro to a Panasonic Lumix GH5 with a 20mm f1.7 lens (H-H020, the only lens that was available to ship immediately). I've had the camera for about a month now, and am in the process of getting a 12-35mm f2.8 lens (H-HSA12035), because the camera has amazing video quality, and I'd like to use it more than the 20mm allows. The weakest point is the autofocus. But despite this I have yet again a new hobby: vlogging. I already shot and edited the pilot episode (cooked some food) on one Saturday (took almost the whole day!).

The episode turned out quite ok, but I'm still debating whether I dare to publish it, even after some tweaks. The thing is that I've continued the self-discovery catalyzed by Life is Strange, and that video shows more of the unfiltered and buried personality of mine. I feel I'm not yet ready to expose that to the world. But even if I never publish that - or any other episode - shooting the video was a fun and therapeutic experience. The editing, not so much :P

So, a lot has happened, and will continue to happen, but at a much slower pace. At least when it comes to programming projects. I feel great sadness about this, but I guess this is called getting a life?

The dawn of Tracker v2

Why: Gotta go fast! And besides, reinventing the wheel is the best way to build an understanding of it.

What: C# RPC / PubSub server. Clients in C# and JavaScript. About 400k requests per second per core, with just a single client connection.

How: .NET Core. FlatBuffers. Persistent connections. Code generation.


So… For the past month or so I’ve been working on an improved server for Tracker v1. Tracker v1 is a project I’ve worked on for almost two years now, though there have only been a few bigger sprints; the rest of the time it’s just been running nicely. Unfortunately I’ve been unable to find the time to properly write about it, so maybe this’ll have to do.

As the name might suggest, Tracker is a system for realtime tracking of connected devices. The server is contained in a single Python process (and a MySQL database). There exist some additional tools, and also the tracking client for modern Android phones. The trackable devices use short, encrypted UDP datagrams for sending the position data to the server. This position data is then relayed to connected clients via WebSockets, and then overlaid on a map.

The data is also saved to the database, so that it can later be analyzed into trips (which are just position updates clustered using time gaps), and viewed using the web interface. The web interface also includes user management. The Android client is configured by reading a QR code generated using the web interface. The code includes the server address, device identity and encryption key.
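The time-gap clustering mentioned above can be sketched roughly as follows. This is only an illustration, not the actual Tracker implementation: the record shape and the 5-minute gap threshold are assumptions.

```python
# Group position updates into trips: a new trip starts whenever the
# gap between consecutive updates exceeds a threshold.
# (Illustrative sketch; field names and the threshold are assumptions.)

def cluster_trips(updates, max_gap_seconds=300):
    """updates: list of dicts with a 'timestamp' key (unix seconds), sorted ascending."""
    trips = []
    current = []
    for update in updates:
        if current and update["timestamp"] - current[-1]["timestamp"] > max_gap_seconds:
            trips.append(current)  # gap too large: close the current trip
            current = []
        current.append(update)
    if current:
        trips.append(current)
    return trips

updates = [{"timestamp": t} for t in (0, 60, 120, 1000, 1060)]
print(len(cluster_trips(updates)))  # 2 — the 880 s gap splits the trips
```

The nice property of this approach is that it needs no explicit "trip start/stop" signal from the device; the trips fall out of the data afterwards.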

The system is designed so that the devices do not require any return channel to the server, so that they could, in theory, also be used over one-way radio links. The latest addition to the UDP protocol includes an optional reply message, though. Without it, verifying connectivity and the use of a recent enough encryption key and packet id would require extra effort. I’d like to avoid using TCP in the client, but there are some supporting functions for authenticating with the web interface and updating the trip description.


V1 works just fine, is stable, and somewhat feature-complete. So why the need for a new version? Because I have BIG PLANS. While I haven’t bothered to benchmark the current version (but I really should), I’m quite certain that it will not work with 100k+ or 1M+ users. V1 just doesn’t scale. The server is just a single process, running on a single core. While there are some components that could be split, the fact still remains. It just doesn’t scale.

So, learning from the past we can see that there are a lot of things we can do better. Also, this is a great opportunity to do some really exciting high-scalability stuff!

A new approach

Aaand I already forgot everything, and initially planned and prototyped a new monolithic architecture. Unfortunately the monolith would have required all the business logic to be written in C/C++ (and preferably running on a bare-metal unikernel) to reach adequate performance, and still there wouldn’t have been any guarantees on the level of performance. It would also be a single point of failure: when it would fail, it would be messy.

So, let’s go the other direction this time, for real. In v1 there already was some momentum in the direction of separate services. The geospatial queries were offloaded to a stand-alone microservice, and there were plans for moving the UDP handling and decryption to a separate process. So, I’m now proposing an all-new microservice-inspired architecture, where many of the tasks are running with only minimal inter-service dependencies. This way the load can be spread to multiple machines, and maybe, just maybe, there’ll be a way to make the system more resilient to outages in individual services.

But how about the clients? They could, in theory, communicate directly with individual services, but user authentication, service discovery and security go so much smoother if there are only one or two endpoints the clients connect to. These connection points would then pass the messages to relevant internal services.

And this, dear reader, is what this post is all about.

One proxy to rule them all

This client communication endpoint should thereby be able to transmit - and possibly translate - messages from clients to the internal services, and vice versa. And because it would be extremely wasteful to open a new internal connection for each client, the communication should be handled using a message bus of some sort.

Most of the messages the endpoint can just directly proxy, but for others it needs to have some intelligence of its own. It should be able to enrich the requests using the user identifier, and thereby also handle the user authentication, at least on some level.

To keep the proxy simple, user authentication should be the only state it contains (and maybe some subscription state, so that the proxy can subscribe only once for each topic). This allows for running multiple proxies at the same time, evenly handling the client requests. A separate load balancer is thereby not required.

(In the ideal case the proxy would also handle service discovery, failure detection, and automatic failover. As part of this mechanism, the proxy could also - if it doesn’t make it too complicated - be the primary location to set feature flags. Feature flags are toggles that the system administrator can set to disable parts of the system even if no faults are present. The flags could, for example, set some internal service read-only, or disable it altogether. The rest of the features will then continue working, if they do not require access to that service. For example, the user service could be disabled for maintenance, but all existing authenticated connections would continue working. This, though, gets more complicated if there are dependencies between the services.)

And this proxy is what I’ve been working on now.

One serialization format to bind them all

I place my bets on a strongly structured binary protocol that can be read without additional copy operations. (Just like I’m now switching from the weakly typed Python to strongly typed C#. Strong typing is very useful in eliminating many accidental mistakes when typing identifier names etc.) One such strongly typed serialization format is FlatBuffers (made by Google). It is much like Protocol Buffers (also made by Google): the message format is defined using a schema file, and then a code generator is run, producing the strongly typed bindings to manipulate the messages. The message format supports protocol evolution, meaning that new fields can be added, and messages still stay readable for older and newer clients. Not a very important aspect in a system of this size, especially when all the parts are controlled by a single party, but it’s kinda nice to have.
table RequestSum { a:int; b:int; }
table ReplySum { sum:int; }
Listing 1. Example message types.

As mentioned, the cool thing with FlatBuffers is that they are extremely fast to read, only about a few times slower than accessing raw structs (due to the vtable-based offset system required to support the compatibility). No additional processing is required to access the data, and building the messages is equally straightforward, requiring no additional allocations beyond the single builder pool allocation.

And one extra complexity layer to unite them all

Like many other serialization formats, FlatBuffers doesn’t know anything about the concept of RPC. Because of this, I made my own layer on top. A service file defines named remote methods that take a specific message type as an argument, and return another type. The message types themselves are defined in the FlatBuffers schema file.

service SomeApi {
  sum: RequestSum -> ReplySum;
  sub: RequestSub -> void;
  pub: RequestPub -> void;
}
Listing 2. Example service.

After a service has been defined, a few code-generating tools are run to generate the template code for the server, and to define an interface the clients can invoke to make calls to the service. This code generation step also creates the identifiers that tie message names to actual protocol-level identifiers. These identifiers are generated for events and errors too, not just the requests and responses mentioned in the example. Those two do not require extra definitions in the service file (at least not for the moment; it might be nice to explicitly define them, too).

And it works

It was quite an effort, but preliminary testing gives very nice performance numbers. C# server, running on .NET Core RC2. TCP, minimal framing protocol for messages. Pipelining. A single Intel Core i7-4790k server core can handle about 400 000 request/response pairs per second. I’ve yet to test this using multiple clients, but I have high hopes. Those hopes might get shattered though, as only one request is being executed at once (per connection). This of course is not a problem if all the operations happen in-memory, but throw even 1 ms of IO latency in there, and the request rate drops to 1000/s…
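The "minimal framing" idea is essentially length-prefixed messages on a TCP stream. Here is a sketch of the concept (in Python for brevity; the actual server is C#, and the exact frame layout here is an assumption, not the real protocol):

```python
import struct

# Length-prefixed framing: each frame is a 4-byte little-endian length
# followed by the payload. A reader accumulates bytes and emits only
# complete frames, keeping any partial frame for the next read.
# (Illustrative sketch; the real protocol's layout may differ.)

def frame(payload: bytes) -> bytes:
    return struct.pack("<I", len(payload)) + payload

def unframe(buffer: bytes):
    """Return (complete payloads, leftover bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack_from("<I", buffer)
        if len(buffer) < 4 + length:
            break  # partial frame: wait for more bytes
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer

data = frame(b"hello") + frame(b"world")[:5]  # second frame arrives partially
messages, rest = unframe(data)
print(messages)  # [b'hello'] — the partial frame stays buffered
```

Pipelining then just means the client keeps writing new frames without waiting for the previous response, which is what makes the single-connection throughput numbers possible.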

The plan for the future is to - obviously - solve that little problem, then clean up code generation, tidy up the rest of the code, improve the tester, and then maybe finally get to writing some business logic.

Configuring HAProxy 1.4 to do host-based reverse proxying

Foreword: the new(er) HAProxy 1.5 supports map-based hosts, which are the recommended way. But here’s a guide for those of us who are stuck with the older version.

So, you’ve got several application servers, with ports all over the place. How to organize this mess to be accessible more easily? By using DNS and a reverse proxy that is aware of the Host-header. HAProxy is perfect for this. High performance and low footprint. As a bonus HAProxy can also be configured to terminate HTTPS requests so that even your dumbest services can benefit from encryption!

For setting up the proxying, here’s a handy little list:

  1. Install; on Debian, run apt-get install haproxy
  2. Configure; take a look at this paste and copy the contents to /etc/haproxy/haproxy.cfg
  3. Enable; to mark that you’ve actually edited the configuration, go to /etc/default/haproxy and set ENABLED=1.
  4. Run; service haproxy restart.

And that is it! Now you have a basic HAProxy installation that reverse proxies requests to two different hosts/ports based on the Host header. Simply add more backends and acl/use_backend combos to introduce new services.
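For reference, the essential shape of such a configuration is roughly the following (a sketch only: the hostnames, ports and backend names are made up, and the linked paste is the authoritative version):

```
frontend http-in
    bind *:80
    # route by the Host header
    acl host_app1 hdr(host) -i app1.example.com
    acl host_app2 hdr(host) -i app2.example.com
    use_backend app1 if host_app1
    use_backend app2 if host_app2

backend app1
    option forwardfor
    server app1 127.0.0.1:8001

backend app2
    option forwardfor
    server app2 127.0.0.1:8002
```

The `option forwardfor` lines are what make HAProxy add the X-Forwarded-For header discussed below.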

But the fun doesn’t end here! Now you have a bunch of backend servers whose requests all originate from a single host, and that breaks ACLs and logging and everything! To fix this you’ll have to manually go through each and every service and make the necessary configurations.

For example, to make HFS trust the X-Forwarded-For header set by HAProxy, you’ll have to edit its configuration file manually, as per this guide.

For Apache there exists a whole module for this: mod_remoteip. Simply include the module and set RemoteIPHeader X-Forwarded-For and RemoteIPInternalProxy proxy_ip_here. You may also need to change %h to %a in LogFormat to get the logging to work correctly.
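Put together, the Apache side might look like this (the proxy address 192.0.2.10 is a placeholder for your own proxy's IP):

```
# Load the module and trust X-Forwarded-For from our proxy only
LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 192.0.2.10

# Use %a (real client IP) instead of %h (connection peer) in logs
LogFormat "%a %l %u %t \"%r\" %>s %b" common
```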

No matter what you are using, the common thing is to mark your proxy machine as trusted, so that the real remote IP can be read from the header. Be aware that the header contains a comma separated list (or just multiple consecutive headers of the same name), and only the last entry - the one appended by your own proxy - can be trusted. The rest can be freely set by the client.
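Extracting the only trustworthy entry can be sketched like this (illustrative, not tied to any particular framework):

```python
def trusted_client_ip(xff: str) -> str:
    # X-Forwarded-For is a comma-separated list. Earlier entries are
    # client-controlled; only the last one was appended by our own
    # trusted proxy, so that is the only address we can rely on.
    return xff.split(",")[-1].strip()

# The client tried to spoof 1.2.3.4; the proxy appended the real address.
print(trusted_client_ip("1.2.3.4, 203.0.113.7"))  # 203.0.113.7
```

Note that this only holds because HAProxy is the sole path to the backends; if clients can reach a backend directly, the header means nothing.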

On 'Life is Strange' – part II

Wowzers. What a journey.

This post ended up taking a lot longer to write and in the process turned out a lot longer than I first anticipated. And I still feel this isn’t everything that could be said (and nor was the previous part).

Most of this was written in the immediate days after the release of the final episode and most of the editing was done by the end of that week. Afterwards it still took a few extra weeks to recover enough to even be able to get everything together and add some missing observations.

But to get on with this: first some initial thoughts about the final episode. I’ll try to be vague; there shouldn’t be any spoilers. These thoughts are complemented with some self-reflection.

Episode 5

While the final episode was enjoyable, it lacked some of the magic all the previous episodes had. I don’t know if it was because of the fact that I wasn’t still done processing the previous episodes, or the fact that I had so much other stuff distracting me in real life. In any case I felt slightly disconnected.

Or maybe it was the fact that the episode was more action-oriented where the previous episodes were more dialog-oriented. Also, with so many different locations and quick transitions between them the episode felt a bit rushed. But then again, it’s also about how you view the thing. Squeezing together tons of different fan theories (intentional or not) and sensibly finishing a time travel story is definitely not easy.

There exists a variety of arguments to be made for and against the final episode and especially the endings. The disparity of polish between the endings was quite disappointing and a lot was left to be desired. And the overall feeling of sadness about the end of this all is completely another matter...

After watching the credits I still had to spend maybe five minutes just staring at the main menu listening to the music, not really comprehending what had happened nor that the game really was over.

Craving for closure

As above - and like I so subtly hinted in the previous part - Life is Strange touched me with an unexpected intensity. Partly because of the game itself (the story, the characters and the atmosphere) and partly because of how it led to some pretty major self-reflection. First about the game itself.

There is so much I want to say.
So much emotion.
So many thoughts.
So much everything.
And while that everything was.. everything, it also was almost too much.

And now the end is here and I feel empty. The closure wasn’t what I was expecting, nor was it what I was hoping for. Instead, it was what I needed.

While the endings left A LOT open, they also had an adequate amount of closure to keep me from totally collapsing. This allowed me to limp to the game’s reddit community, where the feelings could be shared. Thank you all. In addition, after I had played episodes 3 & 4 that I discussed in the previous post, I listened to PSNStores podcasts about those episodes. This helped me a great deal in processing what happened in those episodes.

The more time I spent reading reddit and watching interviews, the better I finally felt. Now that I’m writing this particular paragraph weeks later, I’m almost completely at ease with everything. I’ve had time to research how the final episodes, and in particular the ending, are supposed to be understood.

* * *

As the game’s developers have told on many occasions, the game was about the personal growth of Max. A nostalgic coming-of-age story. This is a crucial cornerstone to understand. The relations between characters were crafted to be so perfect and special. For example Chloe was crafted to be THE perfect friend with a deep emotional connection to Max / the player, evoking a longing for such a person in real life. But real life does not work this way. It just doesn’t.

* * *

To adequately process coming-of-age stories, there needs to be some reflection on one’s own life. This is the part that most definitely changed me. Some details below, but the gist of it is that while affecting (at least in the short term) how I see the world, this experience also made me realize certain rather grander / fundamental things about life. Life is so weird.

The most immediate realization from this whole experience: everything will come to an end and there is nothing you can do. Ends have to be endured. It’s hard to endure everything alone, and for that you need someone, or someones. (In this case primarily /r/LifeIsStrange and the podcasts. I also had some friends, but their role was just to be an audience while I announced how this game had had such an impact on me. But that helped, too.)

In the end you’ll feel weird and dull, but also oddly at ease: there is nothing you can do now. I’ll never forget the journey, or how it helped me grow.

Transforming life

As the game and the setting were so greatly crafted, it was really easy to actually become Max, not just be someone who controls her avatar. Not many games can accomplish this. Almost without noticing it I had slipped into the wonderful nostalgia-colored teenage life of Maxine Caulfield.

This glimpse into another life. The life of an adorable, slightly geeky girl who likes photography and innocently loves to observe the world. But you can’t change your life just like that; I am not Max, nor is her life mine. No matter how much I hoped to be Max, it was not going to happen. But you can try to slowly change yourself.

My immediate reaction to this was of course to try and be more like Max, try to observe the world in that same kind of non-judging, all-seeing way. But it’s not that easy. While it’s a bit of a stretch, I do have moments when I feel emotions somewhat comparable to hers. Not everything is perfect, but I’m pretty good at what I do and how I have my future planned. I do have the occasional moments of feeling great in life. While it doesn’t happen too often, I’ve also had some good moments with friends. I should just embrace who I am, no matter what.

And you don’t actually need to have an opinion on everything, just keep an open view on the world. Don’t just plod through everything without taking a moment to appreciate what you are doing.

Do this and maybe you’ll end up more like Max. More like a better person. And don’t try to necessarily change the world, change your view on it.

Emotional layers

Having continued on this path of self-reflection, we are now arriving at the very core.

This experience has finally made me realize that there are multiple layers (or segments, or whatever to call them) of me. Sure, layering is a known psychological theory, but I didn’t realize just how accurate it was, or that I too implemented it. There are the layers I show at work or when studying. I know it’s necessary to have some emotional separation, but that also makes me feel incomplete. And then there is that one layer at the core that was affected by all this. Maybe that is the real me?

I’ve been under a lot of stress this semester, and as a coping mechanism I’ve had to segment myself into multiple distinct thought-spheres (wtf is that word). Sure there is some crosstalk, but it has stayed low. While this has helped me focus on the task at hand, I’ve begun to feel the wearing effects of maintaining that emotional isolation.

Those segments have always been there, but lately they have been even more isolated. The pressure building up.

The outer layer is divided into two distinct things. There is one me for studying and another for work. Protected by those there is the normal me for friends, gaming and living in general. But that is not everything. There have been occasional hints about an isolated layer below, but nothing really concrete.

But now this game pierced through all those layers and exposed the very core underneath, the ‘real me’ - or at least as real as it can get. There was a reason that core was isolated. It’s sensitive. And this game was . It cracked that isolation open. I’m in ruins. I’ve tried to keep everything from imploding, but it has not been easy.

Maybe the game was an escape?

Total(ish) immersion, or whatever?..

* * *

I’m actually having difficulty finalizing this section, as that would mean I accept all this.

Where has the time gone?

I don’t know how I would have fared had I not had an almost perfectly timed semester break this week. I still went to work, but didn’t have to worry about exercises and lectures. Instead I had time to focus on all of this: process everything (or as much as I could/can) and stumble toward closure.

Like a comment on reddit said, it makes no logical sense for a video game or fictional characters to evoke this much emotion. But this is art, and art is supposed to have some kind of effect.

I wish I could stay in this moment forever. But then, I guess it wouldn't be a moment.

On 'Life is Strange'

A story first, skip ahead for the actual review.

Not long after the game's initial release I picked up the first episode and was immediately hooked. Played through it in a weekend, and then for a second time with different choices.

I immersed myself completely into the world and story, and it was intense. Couldn't even think about playing another episode for a whole week. It actually took well over a month before I could play the second episode.

And the second episode was even better. Now I had to take an even longer break.

When I finally resumed playing, both ep3 and ep4 were out, and ep5 just two weeks away. I tried to pace myself by just playing ep3 during one weekend, and then ep4 the next.

I failed. Ended up playing ep3 in a single day. And because it ended with such a cliffhanger I just had to play ep4 the next day.

That was a dire mistake. Now I'm broken and feel empty and hollow. Couldn't even function properly for the rest of the day (or the next).

* * *

This game is larger than life. The game's protagonist is a photographer, and through her eyes it is seen how vibrant and colorful the world actually is. The atmosphere is truly captivating and full of wonder, and the plot something unexpected.

It took a long time to quantify, but I finally figured out why the game resonated so strongly with me. My life is quite dull and boring, and immersing myself completely in the game world and its characters allowed me to break free from that grayness, and experience the full spectrum of the shades that is life.

There is also the role playing aspect. I'm normally not that social. I stick to the routine and am quite cautious on trying out new things. But the game's protagonist is social. Routine is broken by the unfolding events and that leads to trying out new things. Even the character interactions allow for experimentation thanks to the rewind ability.

This all is so much more than the gray ordinariness of (my) real life. The withdrawal from stopping playing is real and hit me hard. Combine this with an awesome plot that you can influence in a real way. Add a setting that lets me partly (re?)live what I kinda missed growing up. And finally add the very likeable protagonist, a great selection of songs and the very fitting and beautiful graphics (and not a single problem with performance).

6/5. Will play again - when I recover.

(Also, there was a great opinion piece on PCGamer by Jody Macgregor, I highly recommend reading it.)

Plotting GPS data

Sometimes going out for just a walk isn’t that easy and some extra motivation is needed. Luckily I had just that extra: going for a walk allowed me to get some rather important real-world data for the GPS tracking service I have been working on for quite some time.

During those walks I had the idea to further use the recorded data. The forest was filled with paths and I thought it’d be great to map those. And maybe even have some kind of heatmap of the most traveled routes!
Work, studies, gaming and general procrastination kept me busy, but here it finally is:
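
For the curious, the core of such a heatmap is tiny: bin the recorded coordinates into a 2D histogram so that frequently traveled routes accumulate higher counts, then render the counts as an image. A minimal sketch of the idea (not the actual code from the service; the data here is simulated):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Simulated stand-in for recorded GPS points: a random walk around a
# starting coordinate (hypothetical data, (lat, lon) pairs).
rng = np.random.default_rng(0)
steps = rng.normal(scale=1e-4, size=(5000, 2)).cumsum(axis=0)
points = np.array([60.2, 24.9]) + steps

# Bin the points: densely traveled routes accumulate higher counts.
heat, lat_edges, lon_edges = np.histogram2d(
    points[:, 0], points[:, 1], bins=200)

# Draw the counts; hotter color = more visits to that cell.
plt.imshow(heat, origin="lower", cmap="hot",
           extent=[lon_edges[0], lon_edges[-1],
                   lat_edges[0], lat_edges[-1]])
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.savefig("heatmap.png")
```

Real recorded tracks would of course replace the simulated walk, and projecting latitude/longitude into meters would remove the map distortion, but the binning idea stays the same.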

Investigating TCP timeouts

As hinted by an earlier post, one of my latest work projects was building a WebRTC-based video streaming system. The system features a WebSocket backend server, written in Python with Gevent, that handles the client sessions and signaling. My co-worker was performing some testing and everything seemed to be working just fine. Except at one point he encountered a situation where the server insisted another client was already connected, when it clearly wasn’t. Even netstat said that the TCP connection was ‘established’.

Some WebSocket client connections were arbitrarily staying open even after the browser was closed! I had just added some hooks to the server to better detect disconnected / timed-out clients, and a good guess was that I had messed something up. Nope. Debugged the thing for hours but couldn’t find a single bug.

That is, until I tried toggling network connections. Being a system targeted at mobile devices, one facet of testing is to check how well it works with different network connections. If the network connection was turned off while the client was connected, the server couldn’t detect that until about ten to fifteen minutes later, even though it was sending keep-alive packets every 20 seconds. Strange indeed.

But maybe it wasn’t, maybe the write call was blocking the specific greenlet? That is an easy thing to test: just dump some stack traces. But nope again. How about if I run the server via strace and try to spot the writes there? It took a bit of effort, but the strace output revealed that the write calls were performed just fine! This is starting to be most troubling…

But then a revelation: write returned EPIPE. After quite a bit of research I had finally found the reason for this behavior: TCP timeouts. Turning off the network connection really did just that: it cut the connection without giving the protocol stack time to say its goodbyes. The server then thought the client just had a really bad connection and kept resending the queued data with exponential backoff, per the TCP spec. My math didn’t quite match, but in an old thread the total timeout was calculated to be 924.6 seconds with the default TCP parameters. This was quite close to the actual timeout observed with the server.
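
That 924.6-second figure can be reproduced with a quick back-of-the-envelope script, assuming the usual Linux defaults (initial RTO of 200 ms, RTO capped at 120 s, tcp_retries2 = 15 — assumptions about typical defaults, not values verified on the server in question):

```python
# Retransmission intervals double each time, capped at RTO_MAX.
# The connection is declared dead only after the final wait expires.
RTO_MIN, RTO_MAX, RETRIES = 0.2, 120.0, 15

total, rto = 0.0, RTO_MIN
for _ in range(RETRIES + 1):   # 15 retransmissions + one final wait
    total += rto
    rto = min(rto * 2, RTO_MAX)

print(total)  # → 924.6 seconds, i.e. a bit over 15 minutes
```

The exponential part contributes only 204.6 seconds; most of the wait comes from the later retries sitting at the 120-second cap.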

* * *

I sighed and changed the keep-alive protocol so that timely replies were required instead of just relying on failed writes. Now it works beautifully.
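
In sketch form, the reply-based keep-alive looks something like this (hypothetical names and intervals, not the actual server code — only the 20-second ping interval comes from the post):

```python
import time

PING_INTERVAL = 20.0   # seconds between pings, as in the original protocol
PONG_TIMEOUT = 10.0    # grace period for the reply (assumed value)

class KeepAlive:
    """Track liveness by requiring timely replies to keep-alive pings."""

    def __init__(self):
        self.last_pong = time.monotonic()

    def on_pong(self):
        # Called whenever the client answers a ping.
        self.last_pong = time.monotonic()

    def is_dead(self):
        # The client is declared gone as soon as a reply is overdue --
        # no waiting for TCP's ~15-minute retransmission timeout.
        return time.monotonic() - self.last_pong > PING_INTERVAL + PONG_TIMEOUT
```

The crucial difference is that the server now acts on the *absence* of a reply, so a dead link is noticed within tens of seconds instead of whenever the kernel finally gives up on retransmitting.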

Tl;dr: TCP was just trying really hard to resend data after it detected packet loss, only giving up after about fifteen whole minutes had passed.