I graduated

1.5 months ago. I guess it should feel like something? Clearly not, as I've been in no hurry to blog about it in more depth. And it only took 8.5 years :d

That is also probably the reason. I've been working almost full-time for about 4.5 years now, and my studies were already largely complete when I started working. Since then they just progressed slowly and sluggishly, but relatively surely. The two larger challenges were passing Swedish and then, of course, the thesis. Everything has been one long marathon, with the goal of simply reaching the finish line.

And now I did. Sure, it's an accomplishment, but is it really anything that special? The goal was always there, and reaching it didn't really come as a surprise; there was no great, overwhelming feeling upon reaching it. The race just ended, and there was little more to it than that.

* * *

But actually, there is a bit more. In order to get to that, let's talk a bit about the thesis itself. Like my Bachelor's, my Master's was about something I was researching at the time. For many years I've been interested in backend development. Now everything is finally paying off functionality-wise, so much so that it's worth investing in deployment. What good is it to be able to write backends with little effort when making them available and easily accessible is uncharted territory?

So for my thesis I wanted to take a look into the process of creating backends in a way that makes them easier to deploy. I didn't want to bite off more than I could chew, so I purposefully didn't aim for an automated deployment pipeline. I knew better than that. But the real friends are the ones we made along the way, so buckle up ;)

I had previously written a simple restaurant menu parser in PHP for my Nokia e51. When the page was requested, the server fetched the menus from preset restaurants and rendered a slimmed-down, HTML-only page with just the menu contents. This was a big advantage compared to manually opening the restaurants' sites, waiting for needless data transfer over a slow mobile connection, and then the slow rendering of complex content and scripts on a low-power mobile device.

Anyway. When the parser was in need of an upgrade I ported it to this new thing called .NET Core and hosted it on a now-retired iteration of usva. Then at one point I also dockerized it and made it run on Azure. I did this because it was the only application running on usva, and now I could do other things with the server (for example turn it off :p). But anyway. That was the first step on the journey of improved hosting.

Several years passed until the thesis. I needed an application to use as its base, and after making a list of possibilities the menu parser was pretty much the only reasonable choice. The other alternatives would have been far too work-intensive in their implementation alone. I needed a tried concept so that I could focus on just researching deployment and hosting (and common design things). But of course I couldn't not upgrade the application, so for the thesis I took the menu parser logic of the old system and wrote a whole new site, including a history database for the foods and even some user-specific features as an example. I also added an admin API and some other features such as centralized structured logging.

While the academic goal was to present some design guidelines for containerized applications, the practical goal was to create the basic building blocks that could later be used for more involved DevOps concepts, namely build and deployment automation. At the start I was not sure how these would end up looking, or even whether I'd ever get to see them.

But ultimately I ended up with something I can be quite satisfied with! I could use the Dockerfile to automate building the application, and then use Docker Compose to define the runtime: namely the exposed ports, the configuration and the whole set of services (application, updater-parser, database and log storage). What still remains to be improved is how the database is used, as some manual steps remained for database migrations. But everything else was automated quite far.

The goal was to simplify hosting and deployment, and it really did get a lot simpler. Just one command to build the app, another to copy it over to the target server, and one final command to update the running instances. But what is even more important is that it is the same three commands no matter the app. This opens the way to efficiently building automation in the future by replacing just those commands. But it also makes the manual work easier.

* * *

The work done on the thesis is just one part of a larger continuum of backend development. While the thesis (and graduation in general) is a great step, the work never ends. And I'm already busy with the "next" steps - as indicated by the surprising number of blog posts as of late. There is always so much more work to be done to get better. The same train of thought extends to my other hobbies, too. In the shadow of that continuum, there just doesn't seem to be any time to feel the pride and accomplishment of the past.

At least alerting works...

I've also done some work in setting up infrastructure monitoring. There's still work to do - and do better - but at least I have one alert. And it works.

I think I was just about to do something more leisurely when Grafana sent me a Telegram message saying that there was something wrong with NATS response times. I open the link and see that there is no data, meaning that the instance is probably down. But there's no other data either. Fuck. Everything going up in flames this soon?

But wait a sec. There is data, momentarily. Then it disappears again. I bring up the logs for InfluxDB and see the error "panic: keys must be added in sorted order". I spend quite a while trying to figure out what exactly is wrong and how to proceed, almost giving up. It seems that a lot of the tooling for fixing and managing the files has been removed or made internal-only. But then I find an up-to-date guide for rebuilding the index and decide to try it.

Because my installation is dockerized, and there seem to be some issues with the rebuild command, I had to chown the data directory to "some user", then run the repair command, and then chown the files back. And yay! It works again. For reference, the docker command I used: docker run --rm --user 1000 -v /path/to/influxdb-data/:/data influxdb:1.7.9 influx_inspect buildtsi -v -datadir /data/data -waldir /data/wal

At least it works again. But just as I thought everything was going nicely... Maybe the problem is the server itself? It served as my desktop earlier, but I moved away from it due to constant crashes with GTA V, and much rarer crashes at other times. Maybe I have to invest in some proper hardware :o We'll see. Maybe it'll work again without issue for a long time. Pls :s

Testing the stack with CircuitPython


Like I stated earlier, one of the reasons why I've lately been so focused on improving my stack is the fact that I backed Meadow F7 a while back. I kinda want to maximize my productivity with it, so I've been doing what I can beforehand (and while the mood lasts).
I already talked about how this work included setting up Grafana and other data collection facilities. I also 'teased' about using NATS for registering some long-running ad-hoc jobs. I've been calling the NATS-based thing Sumu. More about it later. But anyway, these were now put to a somewhat unexpected test when I ordered a bunch of preparatory stuff from Adafruit and got a Circuit Playground Express as a freebie.
I'm trying to keep this post short, so I'll just state that it can't really get any simpler to get some samples running with it. The basic documentation is very thorough, there's a bunch of sample code available, and a bunch of sensors are included on the board itself. As a kind of embedded Hello World, I moved on to the onboard temperature sensor after blinking an LED and playing some drum samples.
The code on the device reads the temperature once a second and prints the averaged value every 10 seconds over the serial console. On the PC I have a LinqPad script registering itself to Sumu (with a health check), reading the serial console, and pushing data to Postgres while also serving the current (or next) temperature value via Sumu to browsers (via Node-Red). There's even error handling and retrying in case the serial console gets disconnected for some reason (for example the device is unplugged). All this in 180 lines (with the majority of it being serial console stuff :p) and maybe an hour or two! Feeling good about things!
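
To give an idea of the serial-console part, here's a minimal sketch of what that PC-side loop boils down to. This is not the actual script: the port name, baud rate and message format are placeholders, and the Sumu and Postgres parts are elided.

using System;
using System.IO.Ports;
using System.Threading;

// Simplified sketch of the PC-side serial loop: read lines from the board and
// retry when the device disappears (e.g. gets unplugged). "COM4" and 115200 are placeholders.
while (true)
{
    try
    {
        using (var port = new SerialPort("COM4", 115200))
        {
            port.ReadTimeout = 30_000; // the device prints every 10 s, so this is generous
            port.Open();
            while (true)
            {
                var line = port.ReadLine(); // e.g. "24.3"
                Console.WriteLine(line);
                // ...parse the value, push it to Postgres and publish it via Sumu...
            }
        }
    }
    catch (Exception e) when (e is TimeoutException || e is System.IO.IOException || e is InvalidOperationException)
    {
        Console.WriteLine($"Serial connection lost ({e.Message}), retrying in 5 seconds...");
        Thread.Sleep(5000);
    }
}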

Let’s just hope that this is just the beginning, and not the peak :s

How to create a Nuget package


As mentioned in a previous blog post, I've been looking into creating and hosting Nuget packages (the package management system for .NET). While there are still a lot of things I don't yet have experience with, I feel confident enough to share the basics. But mostly because now I have the process written down for me, myself and I.

Start by creating a new class library with dotnet new classlib and implement whatever the library should do. At least for simpler libraries there really aren't any Nuget-specific considerations. You can also start by installing the nuget binary somewhere in your PATH.

After the implementation is done, it is time to add the Nuget-specific metadata. This information can be specified in two ways: either in independent nuspec-files, or via the csproj-file of the actual project. I tried to use just the project files, but ultimately this proved to be a problem due to limitations of the Nuget CLI. If a project references another, and that other project doesn't have a nuspec-file, then it isn't added as a dependency; instead its binary is directly included in the first package. So, in the end I thought it was easier to just define all the metadata in a single file, the nuspec-file. Here is an example:



This file should be named similarly to the package id and project file, so in this case Dea.WebApi.AspNetCore.nuspec and Dea.WebApi.AspNetCore.csproj. To generate the package containing the project's binary with all dependencies correctly marked, use a command of the form nuget pack -Build -IncludeReferencedProjects Dea.WebApi.AspNetCore.csproj. Even though the argument is the csproj-file, the nuspec-file is also used automatically, as it is named similarly. This also applies to the referenced projects and their nuspec-files. If you want to make debugging easier, you can also add the -Symbols flag to the command.

Pushing this package to a repository is easy if you already have one set up (as described in the previous post): nuget push -Source DeaAzure -ApiKey az Dea.WebApi.AspNetCore.0.2.1.symbols.nupkg

I was going to rant here about how Nuget packages and Azure Artifacts break the debugging experience, as there really isn't an easy way to get the symbols working. But it turns out that everything supports them after all. It was just a matter of including them, and making sure to change the PDB format from Portable to Full. Changing the PDB format is usually one of the first things I do when creating a new project, but for some reason I forgot to do it this time. Now everything just works :) Though I would have preferred to host the symbols separately, so that not everyone who has access to the packages has access to the symbols, too. Not that it really matters, as I probably have to use Debug builds anyway, and it is so easy to decompile the files back to source. And maybe some packages even have the source code available anyway...

But this is actually one great unknown. Is there a way to nicely host (and consume!) different flavors of packages? One with an optimized release-build, and another one in debug mode with all the symbols included. A quick search didn’t reveal a simple solution for this. Maybe we’ll never know.

Edit: also see the addendum about versioning.

Venting about Android development and push notifications


As part of the ongoing effort to modernize my technology stack, the question of notifications constantly surfaces. Today the de facto solution for them on Android is to use Firebase Cloud Messaging (FCM). As I'm trying to use C# for everything, I'll be using Xamarin.Android. Microsoft has relatively good documentation on FCM, so I won't be repeating it. Instead, this post is primarily about venting about Android development, even if Xamarin makes it slightly more bearable. This post is not to be taken as a general indication of, well, anything.

* * *

So. I create a new basic Android application. I try to register a new project in Firebase and add my server as a client application, but Google's documentation is out of sync, so I don't find what I'm looking for. I finally manage to stumble onto the right page and can get to work.

I create a scaffolding for my notification server and try to send the device token to it from the mobile application. It fails. The documentation uses a deprecated API, but that's not it. Okay. I'll just look at the logs. But hah, fuck that. I didn't start the application via the debugger, so I don't get to filter the logs by application. I have to use the command-line version of logcat, and instead I'm flooded with messages from AfterImageCompositionService and the touch debugger and friends. I can't even filter the logs sensibly by my log tags, because tags were changed years ago to be limited to something like 23 characters. But I finally find the error: I can't use plaintext HTTP on Android by default anymore. Not even on the local network! The "correct" way seems to be to define some kind of complex security policy, so I just go for the alternative of slapping usesCleartextTraffic="true" into the manifest.

And things work, yay. I can send notifications, and they appear on the phone without the application even having to be running! But then, as stated in the documentation, I try to send them while the application is running. And they won't work. So I have to construct the notification manually (as the documentation does state), and hope that its appearance matches the "built-in" one.
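
For reference, the foreground path ends up being roughly a service like the sketch below. This follows the general Xamarin FCM pattern rather than my exact code; the channel id, icon and namespaces (support library vs. AndroidX) are assumptions, and the notification channel is assumed to be created elsewhere.

using Android.App;
using AndroidX.Core.App;
using Firebase.Messaging;

// Sketch: when the app is in the foreground, FCM hands the message to this service
// and the notification has to be built by hand to mimic the "built-in" one.
[Service]
[IntentFilter(new[] { "com.google.firebase.MESSAGING_EVENT" })]
public class MyFirebaseMessagingService : FirebaseMessagingService
{
    public override void OnMessageReceived(RemoteMessage message)
    {
        var notification = new NotificationCompat.Builder(this, "default_channel") // channel created elsewhere
            .SetContentTitle(message.GetNotification()?.Title ?? "Notification")
            .SetContentText(message.GetNotification()?.Body)
            .SetSmallIcon(Resource.Drawable.ic_notification) // remember the alpha channel!
            .SetAutoCancel(true)
            .Build();

        NotificationManagerCompat.From(this).Notify(0, notification);
    }
}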

And then the documentation starts to fall apart. Starting with just minor things, like not stating that icons should have an alpha channel, or otherwise they appear as blank squares (the docs were written for older Android). Then larger things: the otherwise very clear testing instructions suddenly say nothing about testing those foreground notifications. Well, that is because they won't work if you just follow the documentation. And by now I've already forgotten what I had to do to somewhat fix them.

And testing the code that creates the notification takes a few more iterations than I'd be comfortable with. Building, deploying and running the application actually takes quite a bit of time, even when I have the relevant acceleration settings enabled. That is, using the shared runtime and only deploying changed modules of the application. It also bugs me that the shortcut for the application keeps occasionally disappearing from my launcher. Then I finally look into this more, and find out that Xamarin, at least on Visual Studio 2019, doesn't honour those settings, and just uninstalls and installs the application every fucking time.

But oh, it gets better. As I try to test both background and foreground notifications, I have to occasionally close the app so that it is no longer running. I do this by pressing the "Close all" button on my Samsung phone. And the notifications won't work. Little did I know that there was some OEM fuckery at work. Apparently closing the application this way is the same as killing it (at least as far as the OS's state-keeping is concerned). This sets a special flag indicating that the app was killed, and the OS-level Firebase service won't deliver messages to applications that have been killed. What fucking bullshit.

After I've regained my composure I move on to the next feature on the list: showing an image attached to the notification. The documentation states that this is as simple as setting the image URL in the notification payload to a valid HTTPS image. The Firebase console even shows a preview of the image!

The documentation was wrong. I spend one full day trying to get the image to work, and it still doesn't. I tried doing it in so many different ways: via both generic and Android-specific settings, via the old and the new Firebase API, via managed code and by hand, via my server or directly, or via the Firebase console. Nothing works. I'm starting to suspect that it might be Xamarin's fault at this point. I couldn't really find any documentation confirming or denying it. I couldn't even find any documentation on how the Firebase library is supposed to work at the application level on plain Android. Is it really operating-system level, or is it just some code the library includes in the application, so that it really is the application showing the notification even when it wasn't running? And maybe with Xamarin no one bothered to implement the code for showing the image. Maybe I'll never know. I could create a new Android Java application, but I really can't be bothered to do it… Thanks to C#, I've started to hate the clunky syntax of Java and the rest of its tooling more and more.

So many unnecessary hardships just trying to get a basic application working… Next comes the fun of trying to keep the device's Firebase token in sync with the server. Apparently it can occasionally change, and must then be sent to the server again. But what if the network doesn't work right then? I'll have to build the scaffolding myself to schedule background work and keep retrying until it succeeds.

And then I finally get to implementing more server-side stuff, like removing the device token from the server should the application get uninstalled. Or delivery notifications! That's where things would start to really get useful! Maybe I can also implement notification storage on the device, so that I can filter notifications by their read status and not show notifications that were resent due to network errors. And then maybe implement some kind of caching and stuff, and data messages. Then I'll probably have to implement downloading and showing attached images myself, but on the other hand it should then finally start working. And then I'd have a nice application with a GUI list showing all the past notifications as well as the unread ones. And then the general server-led notification system can be expanded to span other devices, too. Like lightbulbs! And then add some service health notifications from the other services I've been planning or implementing. And general ones too, like package or price tracking.

* * *

Package tracking was actually the reason I finally looked into this topic. Matkahuolto didn't send notifications when new info appeared in tracking, like some other companies do. So I made them myself. And despite all those hardships, I actually got the notifications and tracking working quite OK. Then the funniest thing happened: Matkahuolto updated their site like an hour after I got everything working :'D I had to make a quick fix to the script. Luckily things got easier, as I no longer had to parse HTML and could get JSON instead. Good thing I included error handling in the script: if the crawl failed 10 times in a row, it would send an error notification. And now I have the LG OLED65E9 TV I ordered :3 Perhaps more on the HDMI 2.1 4k120 fun of the TV later!

Like I mentioned earlier, I'm looking to expand into serverless, so that I won't have to use more error-prone cronjobs and such. I've also never run C# code via cron, and everything is still in Python. So serverless would let me write these in C#. But there seems to be a partial solution to this problem in the form of LinqPad. I've been longing for a serverless platform that made code as easy to execute as LinqPad does. So why not just use it? I even have a Windows PC constantly running. Of course there is the problem of scripts still maybe failing randomly without any automatic restart, but a partial solution exists in the form of monitoring. I've been building some general-purpose service and service discovery code on top of NATS, and maybe that could be used for this, too. A LinqPad script could register itself to that service discovery / health checking system, and then I'd get notifications if the script failed in some way! Maybe more of that, too, later. Later, as always. Well, got to have plans? An endless list of things to do, so that I can never feel satisfied from having done everything.

My own private Nuget feed (featuring packages for simpler web APIs): the brave new world might really be a thing?


Wow, what an unexpectedly productive end-of-the-year this is becoming! A blog post, again! Let’s just hope that this is the new normal, and not just something preceding a less-than-stellar episode.

* * *

Like I've begun to outline in the earlier posts, I've been rethinking my "digital strategy". It is a pretty big thing as a whole, and I'm not sure I can even sufficiently outline all the aspects of both arriving at it and executing it. But I shall keep trying. This time I'm focusing on sharing code. Jump to the final section if you want to skip the history lesson and get to the matter at hand.

As some of you might know, I started programming at a very early age, relatively speaking. I was self-taught, and at that time PHP was starting to be the hot stuff, whereas JavaScript was just barely supported by widely-used browsers. No one even imagined DevOps, because they were too busy implementing SQL-injection holes in their server-side applications, which they manually FTP'd over or developed directly in production, also known as "the server". And I guess I was one of those guys.

Then I found Python and game programming. And as everyone should know, there really isn't that much code sharing in game programming. At least not if you are trying to explore things and produce results as fast as possible without too much planning (not that this doesn't apply to everything else, too). Python was also good for writing smaller one-off utilities, snippets and scripts for god-knows-what. It could also be adapted to replace some PHP scripts. But then I found out that the concept of application servers exists, opening the way for a completely new breed of web applications (with some persistence without a database, plus EventStream and WebSockets). There was so much to explore, again.

I was satisfied with Python for a very long time. Then I bumped into Unity3D, where the preferred language is C#. A language much faster than Python, in part due to strong typing. I wasn't immediately a fan: it took a few years to affect my other programming projects. After that I finally acknowledged all the benefits, and for about 3-4 years now I've been trying to (and eventually succeeding to) shift from writing all my programs, scripts and snippets in Python to writing them in C#. This was also catalyzed by the fact that I worked on a multi-year C# project at work. And now I can't live without strong typing and all the safety it brings. And auto-completion! The largest, and almost only, obstacle was the ease of writing smaller programs and scripts. With Python I could just create a new text file, write a few lines, and execute it. With C#, on the other hand, this meant starting up Visual Studio, creating a whole new project, writing the code, compiling it, finding the executable, and then finally executing it. The effort just wasn't justified for snippets and small programs. But for something larger, like reinventing Tracker once again, or for extract-transform-load tools, it began to make sense. (And games of course, but at this point they didn't really happen anymore, as I was too focused on the server side.)

Then this all changed almost overnight when I found LinqPad. It provided a platform to write and execute fragments of C# code, without the need to even write them to disk (a bit like Python's REPL). This obviously obliterated the obstacle (hehe) I was having, and opened a way to start writing even the tiniest of snippets in C#, while also allowing me to use the "My Extensions" feature of LinqPad together with the default-imported packages to share some code between all these programs. And of course I could just individually reference a DLL in each script.

* * *

Parallel to this is maybe also the fact that I was now starting to get so experienced that I had already "tried everything", and what was left was to enjoy all this knowledge and apply it to solving problems :o

With Python it never really got to a point where I "had" to have larger amounts of shared code or needed to package it in some way. There were a few lesser instances of it, but I was able to solve them in other ways. But now with C# things have continued to evolve, and the need for sharing code between projects, programs and snippets is becoming a reality.

As for why this is only happening now, there were multiple aspects at play. As I said, almost everything was always bleeding-edge, and there was no need to version things. In part this was because there really wasn't any actively maintained code running. Everything was constant rediscovery, and as such there really wasn't any code to share. Rather, everything was constantly forked and improved upon without thinking about reuse too much. Good for velocity. Also, as programming really happened on only one computer, code could be shared by just linking to other files. Code running elsewhere was either relatively rare, or in maintenance-only mode.

With age and experience this is now changing. Or, well… That, but also the fact that I've gotten more interested in smaller and more approachable online services - services which have the ability to create immediate value. A bit like in the old times! In order to make those services dependable, a more disciplined approach is needed. While Tracker will always have a special place in my heart, there seem to be better things to spend my time on. At the same time, another strong reason is that I find myself creating and executing code on multiple computers and platforms, making filesystem-based approaches unsustainable.

* * *

All this has finally pushed things to a point where I should really be looking into building some kind of infrastructure to share code between all these instances. For a time I leveraged LinqPad's ability to reference custom DLLs and was quite happy building per-project utilities with it. But now my development efforts seem to be focused on one smaller slice of the spectrum, with multiple opportunities to unify code. This is especially important considering that I'm also looking to make it more viable to both create and execute code in multiple environments, be it my laptop, my work computer or a server. In my master's thesis I researched using Docker Compose to simplify running code on servers. As indicated in the earlier blog post, I'm now trying to continue on that path and make that task even simpler by utilizing serverless computing. With both Docker and serverless, it becomes somewhat impossible to easily and reliably consume private dependencies from just the filesystem.

Instead, these dependencies should live in some kind of centralized, semi-public location. And a private Nuget repository is just the thing! As outlined in these new blog posts, my idea is to build some kind of small, easily consumable private ecosystem of core features and libraries, so that I can easily reference these shared features instead of copy-pasting code and files all over the place. I started the work of building this library a tiny, tiny while ago, but quickly decided that I should go all in. I'm not saying it will be easy, but it should certainly be enlightening.

The first set of projects to make it to the repository are those related to building web APIs. While I've on many occasions planned on using tools like gRPC, sometimes something simpler is just the way to go. I'm not sure how thoroughly I've shared my hatred for REST, but if I'm going that direction I like something even simpler. This set of projects is just that: a common way to call HTTP APIs the way I want. Ultimately it comes down to one single thing: wrapping application exceptions behind a single HTTP status code and handling them from there. How else am I supposed to know if I'm calling the wrong fcking endpoint, or if the specific entity doesn't happen to exist??! Although, as we all know, writing HTTP libraries is a path to suffering. But maybe one with a narrow focus can succeed?
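
To give an idea of the concept (the actual Dea.WebApi packages aren't public, so all names and the chosen status code here are made up), an ASP.NET Core middleware along these lines would do the wrapping on the server side:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical sketch: every application-level error becomes one well-known status code,
// and the body tells the caller what actually went wrong.
public class AppException : Exception
{
    public string ErrorCode { get; }
    public AppException(string errorCode, string message) : base(message) => ErrorCode = errorCode;
}

public class AppExceptionMiddleware
{
    private readonly RequestDelegate _next;
    public AppExceptionMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (AppException e)
        {
            context.Response.StatusCode = 500; // the single agreed-upon code, whatever it is
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(JsonSerializer.Serialize(new { error = e.ErrorCode, message = e.Message }));
        }
    }
}

// Registered in Startup.Configure with app.UseMiddleware<AppExceptionMiddleware>(),
// while a client-side helper maps that body back to a typed exception.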

Anyway, maybe my impostor syndrome gets something out of this. Or not. It seems that everyone else is releasing public packages and open source code. But I am not. But maybe, just maybe, this will eventually change that thing.

* * *

The task of actually setting up a Nuget repository ended up being easier than I'd like to think. My first reaction was to consider some kind of self-hosted server for it, as I had had much success setting up servers for other pieces of infrastructure lately. Unfortunately, the official Nuget.Server package seemed a bit strange, and I wasn't sure if it even ran without IIS. I also didn't want to spend too much time searching for alternative open-source implementations. So, I decided to try the cloud this once! Microsoft has a free 2 GB tier of Azure Artifacts for hosting not only Nuget packages, but also packages for Node.js and Python. I decided to test it, and to migrate to a self-hosted solution should I need to save on hosting costs in the future.

As I wanted a private Nuget feed, I first created a private project (or team?), and then set up a Personal Access Token so that I could automatically fetch and push packages with my credentials. To actually use the feed with the Nuget command-line tool, I ended up adjusting my local per-user NuGet.Config file. I think I also had to first install the Azure Artifacts credential provider using a PowerShell script. But anyway. This way I should now be able to use the packages in any new project, without having to explicitly reference the repository and access tokens: nuget sources add -Name DeaAzure -Source "https://pkgs.dev.azure.com/_snip_/index.json" -Username anything -Password PAT_HERE. I had to manually add the feed to LinqPad, but that was just as easy: F4 -> Add NuGet… -> Settings.

For actually creating the packages I just used Google and followed the first (or so) result. This means I wrote nuspec-files for the target projects, built them manually and then used nuget pack and nuget push. I also wrote a script automating this. But after I later tried doing the same with a newer version of the nuget CLI, I got some warnings. It seems that it is now possible to specify the package details directly in csproj-files. Apparently this should also simplify things a bit: right now I add the generated DLL to the package manually, whereas the newer way might do it automatically. It did feel a bit strange to have to know how to manually place the DLLs in the correct directory in the nupkg. I'll try to blog about the updated version of this process in more detail later (edit: here). That is, after I get around to implementing it.

While I was relatively hopeful about this whole thing above (though the last paragraph kinda changed that trend), I'm actually not sure what kind of things I'll have to do when I want to use the feed with dockerized builds. I might have to bundle the config into each repository. Remains to be seen. It will also be interesting to see how my workflow adapts to both using and releasing these new Nuget packages. How about debugging and debug symbols? Will there be build automation and tests? How is versioning and version compatibility handled? Automatic updates and update notifications? Telemetry? For a moment everything seemed too easy; I'm glad it didn't actually end up that way! /s

Edit: it seems that at least the debug symbols of my own packages work directly with LINQPad when using Azure Artifacts. For some reason I did have to manually remove the older versions of some of my packages from LINQPad's on-disk cache in order to use the symbols included in the newer packages (the older ones didn't include symbols, even though the newer ones had a different version number). But after that they worked :)

Weekend project: Destiny 2 account tracker (feat. improved metrics infrastructure)


Following the theme set by the previous post, I have continued my pursuit of improved infrastructure. Not that it really is anything special yet. Just more services with almost default config. But the idea is that these services will form some kind of stable core for many other services to follow, and hopefully evolve over time to become even more dependable. At this point they just need to "be out there" and usable, in order to test new ideas.

One of these ideas is about finally upgrading how I collect timeseries data. Over the years I've had several tiny data collection projects, each implementing the storing of the data in a different way. I've already reinvented the wheel so many times, and it is about time to stop. Or at least try to do it a bit less :p Also, in the previous stage I installed MongoDB, so for this stage I thought it was about time to also install a relational database, and PostgreSQL has been my absolute favourite on this front for a while now.

Meanwhile, after doing a tiny bit of research on storing timeseries data I found TimescaleDB. And what a coincidence, it is a PostgreSQL extension! I think we'll be BFFs! That is, once it supports PostgreSQL 12... I wanted to install the latest version of Postgres so that I get to enjoy whatever new features it has. But mostly because I'd then be avoiding a version upgrade from 11 to 12, had I chosen to install the older version. Not that it would probably have been a big problem. Anyway, the data can easily be stored in a format TimescaleDB would expect, and it shouldn't balloon to sizes that absolutely require the acceleration structures provided by TimescaleDB before it is updated for 12. Rather, this smaller dataset should be perfectly usable with just a plain PostgreSQL server. Upgrading should then be just a matter of installing the extension and running a few commands. Avoiding one upgrade by performing another, oh well…

In the far past I've used both custom binary files and text files containing lines of JSON to store time series data like hardware or room temperatures. More recently I've used SQLite databases to keep track of stored energy and items on a modded Minecraft server (Draconic Evolution Energy Core + AE2, an OpenComputers lua script, and a dockerized TCP host (there was not enough RAM in the OC computer to serialize a full JSON in-memory)). I should try to add some pictures if I happen to find them…

For visualizing the data in the past I’ve used either generated Excel sheets or generated JavaScript files with whatever visualization library I found. Not very nice.

* * *

But let’s get to the point, as there was a reason I wanted to improve data collection this time. I finally got around to checking out the API of Destiny 2 in more depth, and built a proof-of-concept of an account tracker.

For those poor souls who don't know what Destiny 2 is: it is a relatively multifaceted MMOFPS, and I've been playing it since I got it from a Humble Monthly (quite a lot :3). As with any MMO, there is a lot to do, with an almost endless amount of both short- and long-term goals. It made sense to build a tracker of some sort so that I could feel even more pride and accomplishment for completing them. And in the case of some specific goals, to maybe even see which strategy works best and which doesn't. The API provides near-realtime statistics on a great many things, and it would be nice to be able to visualize everything in real time, too.

To accomplish this I needed to do multiple things: authenticate, get the data, store the data and lastly visualize it.

Authentication in the API is via OAuth, so I needed to register my application in Bungie's API console and set up a redirection URL for my app. After this I could generate a login link pointing to the authorize endpoint of Bungie's API. This redirects back to my application with a code in the query string. This code can then be posted, form url encoded, to Bungie's token endpoint. That endpoint requires basic authentication with the app's client id and secret. After all this, the reply contains an access token (valid for one hour) and a refresh token for getting a new one (valid for a few months, but reset on larger patches). The access token can then be used to call the API for that specific account. This would probably be a great opportunity to open-source some of the code…
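
As a rough sketch, the code-for-token exchange looks something like this (the endpoint path follows Bungie's public OAuth documentation as far as I remember it; error handling and the refresh flow are left out, and the reply is returned as a raw JSON string):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch of exchanging the code from the redirect for an access token.
// clientId and clientSecret come from Bungie's application console.
static async Task<string> ExchangeCodeAsync(HttpClient http, string code, string clientId, string clientSecret)
{
    var request = new HttpRequestMessage(HttpMethod.Post, "https://www.bungie.net/Platform/App/OAuth/Token/");

    // Basic authentication with the app's client id and secret
    var basic = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{clientId}:{clientSecret}"));
    request.Headers.Authorization = new AuthenticationHeaderValue("Basic", basic);

    // The code from the query string, posted as form url encoded
    request.Content = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "authorization_code",
        ["code"] = code,
    });

    var response = await http.SendAsync(request);
    response.EnsureSuccessStatusCode();
    // The JSON reply contains access_token, expires_in, refresh_token and friends.
    return await response.Content.ReadAsStringAsync();
}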

Speaking of which, there already exist some open-source libraries for using the API! I didn't look into them yet, as I was most unsure about how the authentication would work. I guess I should now take a look.

The process of figuring out how the authentication works involved quite a bit of stumbling in the dark. The documentation wasn't that clear at every step, although at least it did exist. On the other hand, I'd never really used OAuth before, so there was quite a bit of learning to do.
This also presented one nice opportunity to put all this infrastructure I'm building to good use! As part of the OAuth flow there is the concept of the application's redirection URL, but in the case of a script there really isn't any kind of permanent address for it. So what to do? I didn't implement it yet, but I think a nice solution would be to create a single serverless endpoint for passing the code forward. While I haven't talked about it yet, I'm planning on using NATS (a pub-sub broker with optional durability) for routing and balancing many kinds of internal traffic. In this case an app could listen to a topic like /reply/well-known/oauth-randomstatehere. When the remote OAuth implementation redirects back to the serverless endpoint, it publishes the code to that topic, and the app receives it. All this without the app needing a dedicated endpoint! It seems that someone really thought things through when designing OAuth. And as a bonus, that code is short-lived and must only be used once, so it can be safely logged as part of traffic analysis.
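
As a sketch of the app side of that idea, using the NATS .NET client (the subject layout and server address are made up for illustration; NATS subjects use dots rather than slashes):

using System;
using System.Text;
using NATS.Client;

// Sketch: wait for the OAuth code that the serverless redirect endpoint would publish.
var state = Guid.NewGuid().ToString("N");
using (var connection = new ConnectionFactory().CreateConnection("nats://localhost:4222"))
using (var subscription = connection.SubscribeSync($"reply.well-known.oauth.{state}"))
{
    Console.WriteLine($"Open the login link with state={state} and finish the OAuth flow...");
    var message = subscription.NextMessage(5 * 60 * 1000); // wait up to 5 minutes
    var code = Encoding.UTF8.GetString(message.Data);
    Console.WriteLine($"Received OAuth code: {code}");
}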

Reading game data is just a matter of sending some API requests with the access token from earlier, and then parsing the results. At the moment I am only utilizing a fraction of what the API has to offer, so I can't really tell much. Right now this means the profile components API with components 104, 202 and 900. This returns the status of account-wide quests and "combat record" counters, which can be used to track weapon catalyst progression. I'm reducing this data to key-value pairs. Each objective has an int64 key called "objectiveHash" and another int64 as the value. The same goes for the combat record data. At the moment I'm using a LinqPad script that I start when I start playing, but in the future I'd like to turn this into a microservice. This service could ideally poll some API endpoint to see if I'm online in the game, and only then call the more expensive API methods. Not that it would probably be a problem, but I'd like to be nice.
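
Roughly, the polling step is a call like the one below (a sketch: the parameter names are mine, and the real script also parses the JSON into those objectiveHash/progress pairs and handles errors):

using System.Net.Http;
using System.Threading.Tasks;

// Sketch of fetching the profile components; the reply is reduced elsewhere
// into (objectiveHash, progress) int64 pairs for the metrics database.
static async Task<string> FetchProfileAsync(HttpClient http, int membershipType, long membershipId,
                                            string accessToken, string apiKey)
{
    var url = $"https://www.bungie.net/Platform/Destiny2/{membershipType}/Profile/{membershipId}/" +
              "?components=104,202,900";
    var request = new HttpRequestMessage(HttpMethod.Get, url);
    request.Headers.Add("X-API-Key", apiKey);                      // app key from the API console
    request.Headers.Add("Authorization", $"Bearer {accessToken}"); // token from the OAuth flow

    var response = await http.SendAsync(request);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}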

The data is saved to the PostgreSQL database. I wrote a small shared library abstracting the metrics database queries (and another for general database stuff), so now writing the values is very simple. This shared library could be used for writing other data, too, like the temperatures and energy amounts I mentioned above. I should probably add better error handling, so that a lost connection could be retried automatically without intervention from the code using the library. But anyway, here is how it is used:

var worker = new PsqlWorker(dbConfig);           // Lib1: general database library
var client = new MetricsGenericClient(worker);   // Lib2: metrics library
var last_progress = /* last stored value, fetched via the client */;
// ...
// Result is cached in-memory after the first call
var id = await client.GetOrCreateMetricCachedAsync("destiny2.test." + objectiveHash);
if (progress != last_progress) // Compress data by dropping unchanged values
{
    await client.SaveMetricAsync(id, progress);
    last_progress = progress;
}

Visualizing the data was next. I have been jealously eyeing Grafana dashboards for a long time, but never had the time to set something up. There was one instance a few years ago with Tracker3 where I stumbled around a bit with Netdata and Prometheus, but that didn't really stick. Now I did some quick research on Grafana, and everything became clear.

Grafana is just a tool to visualize data stored elsewhere. It supports multiple data sources, each with slightly different use cases. I'm still not exactly sure what kinds of aggregation optimizations are possible when viewing larger datasets at once, but I kinda just accepted that it doesn't matter, especially when most of the time I'd be viewing the most recent data. What I also had to accept was that Grafana doesn't automagically create the pretty dashboards for me, and that I'd have to put in some effort there. But not too much. Adding a graph is just a matter of writing a relatively simple SQL query and slapping the time macro into the SELECT clause. And then the graph just appears. For visualizing the total number of kills with a weapon, this was as complicated as it got. For counters displaying the current value it was likewise just a matter of writing the SQL query with ORDER BY time DESC LIMIT 1.

And while I was at it, I also added a metric for the duration of the API calls. I also remembered that Grafana supports annotations, which could also be saved to Postgres. And the dashboard started to really look like something! Here there's one graph for "favourite" things and then one which just visualizes everything that is changing.


And why stop there? I also installed Telegraf for collecting system metrics such as CPU or RAM utilization or ping times. I went with the simplest approach of installing InfluxDB for this data, as there were some ready-made dashboards for this combination. More services, more numbers, more believable stack :S

* * *

That's it. No fancy conclusions. See you next time. I've been using this system for only a week or two now; maybe in the future I'll have some kind of deeper analysis to give. Maybe. And maybe I'll get to refine the account tracker a bit more, so that I could consider maybe (again, maybe) open-sourcing it.

PS. These posts are probably not very helpful if you are trying to set up something like this yourself. Well, there's a reason. These are blog posts, not tutorials. I don't want to claim to know so much that I'd dare to create a tutorial. Although... some tutorials are very bad, I'm sure I could do better than those.

An update: a brave new world is coming?


Hmm. Starting to write this post is surprisingly hard, despite the fact that I've already written one blog post earlier this year. Well, anyway… Now that I'm finally graduating (more about that in a later post!11), I've had some extra time to spend on productive things and to re-evaluate my focus. And what an exciting turn of events it has been!

I've already spent one weekend briefly investigating the state of serverless computing. This netted me an instance of Node-Red, but also MQTT, NATS, MongoDB and Redis (even though I already had one) to toy around with a bit. All of these are running on top of Docker Compose on a new, previously under-utilized incarnation of Usva. Docker Compose is actually a somewhat new tool for me. I've meant to look into it (or Kubernetes) for quite a while, but never got around to it. That is, until I made it a part of my thesis. Now that research has already paid off – setting up all those services using Compose was a breeze!

Testing out Node-Red actually kicked off a larger process. It demonstrated that utilizing a range of ready-made tools isn't actually all that bad, and that there actually seem to be a lot of tools now that suit my way of doing things. And well, now that I've somewhat reinvented all of the software involved at least once, I can finally get to actually using the existing ones properly :p I'll have to postpone Tracker once again while I discover this brave new world.

Node-Red offers, most of all, a nice platform for hosting short pieces of code. There is a web UI for coding and testing, and after that the code remains on the instance and keeps on executing. No need to worry about setting up routing or cronjobs or any of that! There are also some plugins for using stuff like databases and message queues with minimal work. While Node-Red doesn't seem very scalable for anything remotely complicated (see the price tracker below), it did manage to verify the concept of serverless for me. And tinkering with it was actually quite fun and easy!



* * *

While a bit unrelated, as a part of this I also began experimenting with using events and message queues for communication on a larger scale than before. This shouldn't come as a surprise though; in one iteration of Tracker I already began heading this way. And this kind of indirect messaging actually also plays a role with IoT devices – directly addressing them might not be possible due to how they are networked. But why is this important? For two somewhat equally important reasons. The first is that for the longest time I've been in great pain over how hard service discovery can be. But if services subscribe to queues instead of needing direct addressing, service discovery gets almost completely eliminated.

And second, because IoT is relevant now! I backed Meadow F7 on Kickstarter during the spring; it is a fully .NET Standard 2.0 compatible embedded platform in the Adafruit Feather form factor, with a possible battery life of up to about two years. C# and low power in one small package!!! It even has integrated Wi-Fi and Bluetooth. I should be getting it in just a few weeks. Finally! But I guess more on that in a later post.

So, what does this all mean? With service discovery being taken care of by a message broker, and hosting solved by serverless and docker-compose (at least in theory), there comes an unexpected ability to realistically develop microservices and other lighter pieces of code. And possessing some easily programmable embedded devices opens up a way for a whole new level of interactions. Though first I have to replace Node-Red with a proper serverless solution… Well, not have to, but...

I already have plenty of development ideas, but I guess they too will have to wait for another post!

Trying something new

Let's try something new. Or actually, something old.

What I mean by this is that it is once again time to do some self-reflection. But this time I'd like to do it in Finnish, as that is my native language and the language of most of my thought process. It should thereby be a lot easier to unload my thoughts with great precision, as there is no translation overhead or thinking in two languages simultaneously. Although, as I might have previously mentioned, another language helps to look at things more objectively. Anyway, it is low(er) effort, just what I need now. And there is the potential for feeling good about the end result.

This type of deeper reflection poses a great question, though. Is this something I dare to publish? Things are eternal on the internet; no backsies. And yet: once published, don't I just hope that it gets read?

This was written at 2 am. Try to enjoy.

* * *

I am fucking tired of the fact that these days I can't really bring myself to do anything. At night sleep won't come, and during the day I'm exhausted. My mind is full of all kinds of cool and great things I'd like to do, but none of it ever gets done. Either the interest just evaporates the moment I try to act on it, or I simply don't have the physical energy.

Whether it's watching TV, playing story-driven games, exercising or doing something creative, the end result is the same. Nothing. There are better days too, of course, but they seem to be getting rarer and rarer. I've probably suffered from this problem for about ten years now, but only in the last few years has it gotten bad enough that I actually woke up and noticed it. And sought help. Unfortunately this is one of those things for which there is no quick fix - at least not in the longer term.

Or then again, maybe I'm just fucking lazy.

I'm writing about this because these things, too, should be something one can talk about. To someone, at least. On the other hand, I'm also trying to change my approach a bit again and get at least something done. You see, for a long time now I've been meaning to start - on top of everything else - my own rather free-form vlog, a video diary. But I haven't managed that either, and it bothers me. There are many reasons behind the desire to start a vlog. An excellent topic for, say, the first episode. But I can't bring myself to do it. Quite the chicken-and-egg problem. This text is, in part, an attempt at a kind of detour around that.

* * *

A vlog, why? I can't even put the reasons in order of importance, but here's a half-chewed-through reflection:

A couple of years ago I got excited about "travel film" type YouTube content. Relatively short, postcard-pretty stories about travelling. The central theme being, above all, the fun experiences of the trip and the feelings the trip evoked in the maker of the video in that moment. The carpe diem attitude somewhat characteristic of these videos was, in a way, arresting for me. As I opened up earlier in the context of Life is Strange, enjoying the moment has always been quite a challenge for me. Maybe I could somehow manage to emulate that feeling if I made similar videos myself? And eventually maybe even learn to enjoy the moment?

On the other hand, making videos would be a pretty excellent opportunity to practice performing, or even more importantly, expressing myself verbally and through gestures. Those who know me have probably noticed that this isn't very easy for me either. (Do comment, even anonymously. There's a contact form on the site, which at least probably works. Or if there's anything else on your mind.) If I tried to explain my feelings to a camera, I would have to actually think about them and form an opinion about everything that happened, in the best case while it's still happening. After that I could observe the event in a whole different way and really try to immerse myself in it in the moment. Experiencing things more fully enriches life and brings joy in a whole different way. Or at least that's the hypothesis.

Making videos is of course also a whole world of its own. It's something different from the world of programming and video games where I usually dwell. But still somehow familiar. "Making" is a broad term, of course. It includes at least editing, filming, and even planning. All of them things in which I could also improve and learn a lot. Editing is creative work, if annoyingly laborious. Filming, on the other hand - at least as I see it - is a particularly fine thing. Because like narrating, filming too forces you to look at the current situation from different angles (in every sense of the word :p) and maybe appreciate what's happening. Even relatively simple things can bring great joy when you focus on them. Lastly I mentioned planning. There I'd have room to grow, though in a slightly different way than you'd first guess. I'm not spontaneous in any way, and I'm in my comfort zone when I get to think about what's coming in peace. But I think about everything quite pragmatically, not with feeling. How to bring feeling into the planning? To think about what kinds of emotions I could experience from things. When you take that setup even further it turns around, and you're facing surprisingly deep questions. What should I do in order to feel the feelings I want to feel? What do I actually like? What makes me happy?

So vlogging is perhaps not only a broad new hobby opportunity, but also a tool with great potential. But it is also something else. I dare say that one of the basic human needs is to be noticed. This blog - and in the future perhaps the vlog too - is my way of shouting into the void that I exist. I also have a strong belief that I could genuinely make quality content that is useful to people. Either as entertainment or as substance. But am I ready to do the work to get there? And what is enough? Maybe it's better not to think about it.

* * *

In the wake of this great outpouring, let me also highlight that earlier short text about striving for perfection. It is damn hard, and yet for some reason you keep striving for it anyway. And since you just can't reach it, it is all too easy to lose motivation. Whether it's programming or making videos, or something else. Why attempt something when you know you could always do it a little better? Why publish anything you've gotten underway when you know there's still an endless pile of improvements left? How to settle for good? Or even for mediocre? How to convince yourself that even mediocre can be good enough? How to accept that there will always be critics?

How to let go and give your imperfect but realistic and genuine creations a chance to succeed as they are? How to feel pride in them and in making them? What, in the end, is valuable?