Presenting USG:Rerolled


Despite the hardships, I’ve been able to at least occasionally dedicate some time to the continued development of this year’s Finnish Game Jam / Global Game Jam game. I worked alone this time, and still made a nice game. With TypeScript :o
Now, with the continued effort, it’s starting to look pretty good! The game itself is a mix of Dicey Dungeons, Slay the Spire and maybe even FTL - Faster Than Light. It’s about dice-rolling and loot in turn-based combat, from encounter to encounter. Originally it was to have gameplay that would have emulated Cultist Simulator in at least some questionable way, and hence got the name SDC: Slay Dicey Cultists. But then it occurred to me that this could be an excellent chance to carry on the torch from the project that is my unicorn, The Peli – or as it’s more recently known, USG. So let’s give it up for USG:Rerolled!
The main focus of the game is the equipment, and the many effects the pieces can have. This made me choose to implement the effects with straight-up code, instead of trying to codify all the effects in some kind of standardized structural form. It’s been a good choice for productivity. But I dread the day I need to make some kind of breaking change. I also did some snooping, and found that this is how other games have chosen to approach this problem, too. As the game will have (and already has) quite an assortment of equipment, it only made sense to also create an editor for the items. And the editor. Well… I guess I’ve spent as much time on it as on the game, or something :D
The editor has some standard fields for the most basic attributes of the equipment. It also has an integrated code editor with syntax highlighting and auto-completion. This is achieved by embedding the very same text editor component (and diff viewer) that powers Visual Studio Code, the Monaco Editor. The changes made via the editor are versioned separately from the rest of the game, and the editor has an ASP.NET Core backend implementing the filesystem and code generation functionality. Upon saving the data, the game itself is automatically reloaded with the new equipment data. I’m rather pleased with the setup. I’m planning on extending the editor to also create random encounters for the game, to balance out all the combat. Then there’s always some quality-of-life improvements to be done… But overall, it’s already in surprisingly good shape! The editor even has a graph of the saved versions and their relations...
The latest addition to the game was an encounter map, which brings some structure to the game. Compared to the work already put into the game, this was a relatively small addition, but it did take a few days to get the SVG-based drawing and random generation to work in a satisfactory way. Evening out the randomness would be the next step. Speaking of next steps, I’m kinda testing out whether I could bring ships with limited hardpoints into the mix, without everything getting too confusing. Then I’d be quite close to the dream that is USG. See you next time!




Plans


Everything I planned for. Crumbling before my very eyes. And I’m not even talking about the pandemic.

I kinda thought that now that I've got my studies finished, I'd get a chance to focus on what is important and make time to do things. But that didn't quite pan out, as some things surfaced that'll be requiring my attention. And not just for a little while, but for several years :( And some things that directly contradict everything.

Though it (mostly, not completely) depends on whether I think about those things or not (: It would be wise to give some thought to one of them, but thinking about it doesn’t really help it at the moment… Was this vague enough? :s

But, as they say: it is important to try to keep a sense of normality in a time of crisis. It remains to be seen how well that'll pan out.

Nuget package creation addendum

A short and informational blog text for once!

In preparation for some big plans™ I changed the way I generate my Nuget packages. The old way, where all the metadata was specified in the .nuspec file, was otherwise good, but it didn't specify version metadata for the actual DLL.

I've now changed the process so that the version is specified in the .csproj file, via the VersionPrefix element. I've also defined a VersionSuffix element with a value of dev. This way, when the project is built "locally", the assembly's ProductInfo field reads, for example, 0.1.0-dev. Whereas when the Nuget package is built, I pass the option -Properties VersionSuffix=, resulting in 0.1.0. And as a side effect of moving the version element away from the nuspec, I also had to specify -Version $version, where $version is a PowerShell variable parsed from the csproj file, and include a placeholder version in the nuspec file.
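In csproj terms, the relevant part looks roughly like this (with 0.1.0 standing in for the real version):

    <PropertyGroup>
      <VersionPrefix>0.1.0</VersionPrefix>
      <VersionSuffix>dev</VersionSuffix>
    </PropertyGroup>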

While slightly complex, this now allows me to fetch the version programmatically and know whether the library is an "official" release version via Nuget, or a locally compiled version with unversioned changes. I've also considered just injecting the version together with the prefix as a build step, but for now I'll try my luck with this thing that needs the assembly to reside on disk:
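    // A sketch of the idea: FileVersionInfo reads the version metadata straight
    // from the assembly file on disk, hence the on-disk requirement.
    var location = System.Reflection.Assembly.GetExecutingAssembly().Location;
    var productVersion = System.Diagnostics.FileVersionInfo.GetVersionInfo(location).ProductVersion;
    // productVersion now reads e.g. "0.1.0-dev" for a local build, or "0.1.0" for the Nuget one.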

PS: there's a ton of unresolved issues with the Nuget CLI program, many of them several years old. When investigating the above, I also noticed that my Nuget packages don't include external libraries as dependencies, even when using the option -IncludeReferencedProjects. It is a reported bug, but it hasn't been fixed. One alternative might be to use dotnet pack instead of nuget pack, but I'm not sure what other changes that would entail.

InfluxDB broke, again

It wasn't long ago that I blogged about experiencing my first fault with InfluxDB. And now it has happened again. A bit different this time, though. And much worse.

It all started with the familiar NATS alert, so I tried rebuilding the indexes again with buildtsi and thought that would be it. I was wrong.

This time there was an invalid CRC on one data block, which caused segfaults (lol) within the InfluxDB executable. Speaking of error checking... the verify command in influx_inspect has a bug. It erroneously reports a block as healthy, because the right counter is never incremented. But anyway... The block is faulty. What now? Where is the option to fix it, or to remove the block? Removing the offending file manually doesn't help.

So in the end this made me abandon the data and start from scratch :'( And for some reason docker-compose kept doing some weird caching with the data (even though it was a filesystem mount), even after downing and removing the old container and emptying the old data directory, so it was an effort to get everything working again...

I should probably consider upgrading the hardware. If that is the real problem. For the lulz I also asked for a quote for InfluxDB Enterprise. If I cluster it, then one node can fail and it recovers, right? Right?? But I don't expect the quote to be realistic. And even if it were, it's probably still too much. One alternative might be Apache Druid. But it, too, seems a bit too young a product.

I graduated

1.5 months ago. I guess it should feel like something? Clearly not, as I've been in no hurry to blog about it in more depth. And it only took 8.5 years :d

That is also probably the reason. I've been working almost full-time for about 4.5 years now, and my studies were already somewhat complete even when I started working. During that time they just progressed slowly and sluggishly, but relatively surely. The two larger challenges were passing Swedish and then, of course, the thesis. Everything has just been one large marathon, with the goal of just reaching the finish line.

And now I did. Sure, it's an accomplishment, but is it really anything that special? The goal was always there, and reaching it didn't really come as a surprise; there was no great and overwhelming feeling upon reaching it. The race just ended, and there was little more to it than that.

* * *

But actually, there is a bit more. In order to get to that, let's talk a bit about the thesis itself. Like my Bachelor's, my Master's was about something that I was researching at the time. For many years I've been interested in backend development. Now everything's finally giving so much return functionality-wise that it's worth investing in deployment. What good is being able to write backends with only little effort, when making them available and easily accessible is uncharted territory?

So for my thesis I wanted to take a look into the process of creating backends in a way that made them easier to deploy. I didn't want to bite off too big a piece, so I purposefully didn't seek an automated deployment pipeline. I knew better than that. But the real friends are the ones we made along the way, so buckle up ;)

I had previously written a simple restaurant menu parser for my Nokia E51 in PHP. When the page was requested, the server fetched the menus from preset restaurants and rendered a slimmed-down HTML-only page with just the menu contents. This was very advantageous compared to opening the site(s) of the restaurants manually and waiting for needless data transfer over a slow mobile connection, and then the slow rendering of complex content and scripts on the low-power mobile device.

Anyway. When the parser was in need of an upgrade, I ported it to this new thing called .NET Core and hosted it on a now-retired iteration of usva. Then at one point I also dockerized it and made it run on Azure. This was done because it was the only application running on usva, and now I could do other things with the server (for example, turn it off :p). But anyway. That was the first step on the journey of improved hosting.

Several years passed until the thesis. I needed an application to use as its base, and after making a list of possibilities, the menu parser was kinda the only reasonable choice. The other alternatives would have been far too work-intensive in their implementation alone. I needed a tried concept so that I could focus on just researching deployment and hosting (and common design things). But of course I couldn't not upgrade the application, so for the thesis I took the menu parser logic of the old system and wrote a whole new site, including a history database for the foods and even some user-specific stuff as an example. I also added an admin API and some other features, such as centralized structured logging.

While the academic goal was to present some design guidelines for containerized applications, the practical goal was to create the basic building blocks that could later be used for more involved DevOps concepts, namely build and deployment automation. At the start I was not sure how these would end up looking, or even if I'd ever get to see them.

But ultimately I ended up with something I can be quite satisfied with! I could use the Dockerfile to automate building the application, and then further use Docker Compose to define the runtime: namely the exposed ports, the configuration and the whole set of services (application, updater-parser, database and log storage). What still remains to be improved is how the database is used, as some manual steps remained for database migrations. But for everything else, the process was automated quite far.
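As a rough sketch (with invented service names; the actual images and settings of course differ), the compose file is shaped something like this:

    version: "3"
    services:
      app:          # the web application itself
        build: .
        ports:
          - "80:80"
      updater:      # the updater-parser fetching the menus
        build: ./updater
      db:           # the history database
        image: postgres:12
      logs:         # centralized structured log storage
        image: datalust/seq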

The goal was to simplify hosting and deployment, and it really did get a lot simpler. Just one command to build the app, another to copy it over to the target server, and one final command to update the running instances. But what is even more important is that it is the same three commands no matter the app. This opens the way to efficiently building automation in the future by replacing just those commands. But it also makes the manual work easier.

* * *

The work done on the thesis is just a part of a larger continuum of backend development. While the thesis (and graduation in general) is a great step, the work never ends. And I'm already busy with the "next" steps - as indicated by the surprising number of blog posts as of late. There is always so much more work to be done to get better. This same train of thought extends to my other hobbies, too. In the shadow of the continuum, there just doesn't seem to be any time to feel the pride and accomplishment of the past.

At least alerting works...

I've also done some work in setting up infrastructure monitoring. There's still work to do - and do better - but at least I have one alert. And it works.

I think I might have just been about to do something more leisurely when Grafana sent me a Telegram message that there was something wrong with NATS response times. I open the link and see that there is no data, meaning that the instance is probably down. But there's no other data either. Fuck. Everything going up in flames this soon?

But wait a sec. There is data, momentarily. Then it disappears again. I bring up the logs for InfluxDB and see an error: "panic: keys must be added in sorted order". I spend quite a while trying to figure out what exactly is wrong and how to proceed, almost giving up. It seems that a lot of the tooling for fixing and managing the files has been removed or made internal-only. But then I find an up-to-date guide for rebuilding the index and decide to try it.

Because my installation is dockerized, and there seem to be some issues with the rebuild command, I had to chown the data directory to "some user", then run the repair command, and then chown the files back. And yay! It works again. For reference, the docker command I used:

    docker run --rm --user 1000 -v /path/to/influxdb-data/:/data influxdb:1.7.9 influx_inspect buildtsi -v -datadir /data/data -waldir /data/wal

At least it works again. But just as I thought everything was going nicely... Maybe the problem is the server itself? It served as my desktop earlier, but I moved away from it due to constant crashes with GTA V, and much rarer crashes at other times. Maybe I have to invest in some proper hardware :o We'll see. Maybe it'll work without issue for a long time again. Pls :s

Testing the stack with CircuitPython


Like I stated earlier, one of the reasons why I’ve lately been so focused on improving my stack is the fact that I backed Meadow F7 a while back. I kinda want to maximize my productivity with it, so I’ve been doing what I can beforehand (and while the mood lasts).
I already talked about how this work included setting up Grafana and other data collection facilities. I also ‘teased’ about using NATS for registering some long-running ad-hoc jobs. I’ve been calling the NATS-based thing Sumu. More about it later. But anyway, these were now put to a somewhat unexpected test when I ordered a bunch of preparatory stuff from Adafruit and got a Circuit Playground Express as a freebie.
I’m trying to keep this post short, so I’ll just state that it can’t really be any simpler to get some samples running with it. The basic documentation is very thorough, there’s a bunch of sample code available, and a bunch of sensors are included on the board itself. As kind of an embedded Hello World, I moved on to the built-in temperature sensor after blinking an LED and playing some drum samples.
The code on the device reads the temperature once a second, and prints the averaged value every 10 seconds over the serial console. On the PC I have a LinqPad script registering itself to Sumu (with a health check), reading the serial console, and pushing data to Postgres, while also serving the current (or next) temperature value via Sumu to browsers (via Node-RED). There’s even error handling and retrying in case the serial console gets disconnected for some reason (for example, if the device is unplugged). All this in 180 lines (with the majority of it being serial console stuff :p) and maybe an hour or two! Feeling good about things!
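For flavor, the serial console part of such a script goes roughly like this (a heavily simplified sketch; the port name is made up, and the Sumu and Postgres parts are left as comments):

    using System;
    using System.IO.Ports;
    using System.Threading;

    while (true)
    {
        try
        {
            using var port = new SerialPort("COM3", 115200);
            port.Open();
            while (true)
            {
                var line = port.ReadLine(); // the device prints an averaged value every 10 s
                if (double.TryParse(line, out var celsius))
                    Console.WriteLine($"{DateTime.Now}: {celsius}"); // push to Postgres / serve via Sumu here
            }
        }
        catch (Exception) // e.g. the device was unplugged
        {
            Thread.Sleep(5000); // wait a bit and try reconnecting
        }
    }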

Let’s just hope that this is just the beginning, and not the peak :s

How to create a Nuget package


As mentioned in a previous blog post, I’ve been looking into creating and hosting Nuget packages (the package management system for .NET). While there are still a lot of things I don’t yet have experience with, I feel confident enough to share the basics. But mostly because now I have the process written down for me, myself and I.

Start by creating a new class library with dotnet new classlib and implement whatever the library should do. At least for simpler libraries there really aren’t any Nuget-specific considerations. You can also start by installing the nuget binary somewhere in your PATH.

After the implementation is done, it is time to add the Nuget-specific metadata. This information can be specified in two ways: either in independent nuspec files, or via the csproj file of the actual project. I tried to use just the project files, but ultimately this proved to be a problem due to limitations of the Nuget CLI. If a project references another, and that other project doesn’t have a nuspec file, then it isn’t added as a dependency; instead its binary is directly included in the first package. So, in the end I thought it was easier to just define all the metadata in a single file, the nuspec file. Here is an example:
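    <?xml version="1.0" encoding="utf-8"?>
    <package>
      <metadata>
        <!-- placeholder values; the id matches the project name -->
        <id>Dea.WebApi.AspNetCore</id>
        <version>0.2.1</version>
        <authors>Dea</authors>
        <description>Helpers for building simple web APIs with ASP.NET Core.</description>
      </metadata>
    </package>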

This file should be named similarly to the package id and project file. So in this case Dea.WebApi.AspNetCore.nuspec and Dea.WebApi.AspNetCore.csproj. To generate the package containing the binary of the project, with all dependencies correctly marked, use a command of the form nuget pack -Build -IncludeReferencedProjects Dea.WebApi.AspNetCore.csproj. Even though the argument is the csproj file, the nuspec file is also used automatically, as it is named similarly. This also applies to the referenced projects and their nuspec files. If you want to make debugging easier, you can also add the -Symbols flag to the command.

Pushing this package to a repository is easy if you already have one set up (as described in the previous post): nuget push -Source DeaAzure -ApiKey az Dea.WebApi.AspNetCore.0.2.1.symbols.nupkg

I was going to rant here about how Nuget packages and Azure Artifacts break the debugging experience, as there really isn’t an easy way to get the symbols working. But it turns out that everything supports them after all. It was just a matter of including them, and being sure to change the PDB format from Portable to Full. Changing the PDB format is usually one of the first steps I take when creating a new project, but for some reason I forgot to do it this time. Now everything just works :) Though I would have preferred to have the symbols hosted separately, so that not everyone who has access to the packages has access to the symbols, too. Not that it really matters, as I probably have to use Debug builds anyway, and it is so easy to decompile the files back to source. And maybe some packages even have the source code available anyway...
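For reference, the switch amounts to something like this in the csproj:

    <PropertyGroup>
      <DebugType>full</DebugType>
      <DebugSymbols>true</DebugSymbols>
    </PropertyGroup>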

But this is actually one great unknown. Is there a way to nicely host (and consume!) different flavors of packages? One with an optimized release-build, and another one in debug mode with all the symbols included. A quick search didn’t reveal a simple solution for this. Maybe we’ll never know.

Edit: also see the addendum about versioning.

Venting about Android development and push notifications


As part of the ongoing effort to modernize my technology stack, the question of notifications constantly surfaces. Today the de facto solution for them on Android is to use Firebase Cloud Messaging (FCM). As I’m trying to use C# for everything, I’ll be using Xamarin.Android. Microsoft has relatively good documentation on FCM, so I won’t be repeating it. Instead, this post is primarily about venting about Android development, even after Xamarin makes it slightly more bearable. This post is not to be taken as a general indication of, well, anything.

* * *

So. I create a new basic Android application. I try to register a new project in Firebase and add my server as a client application, but Google’s documentation is out of sync, so I don’t find what I’m looking for. I finally manage to stumble onto the right page and can get to work.

I create a scaffolding for my notifications server and try to send the device token to it from the mobile application. It fails. The documentation uses a deprecated API, but that’s not it. Okay. I’ll just look at the logs. But hah, fuck that. I didn’t start the application via the debugger, so I don’t get to filter the logs by application. I have to use the command-line version of logcat, and instead I’m flooded with messages from AfterImageCompositionService and touch debugger and friends. I can’t even filter the logs sensibly by my log tags, because tags were changed years ago to be limited to something like 23 characters. But I finally find the error: I can’t use plaintext HTTP anymore on Android by default. Not even on the local network! The “correct” way would seem to be defining some kind of complex security policy, so I just go for the alternative of slapping usesCleartextTraffic="true" into the manifest.
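For reference, that's a single attribute on the application element of AndroidManifest.xml:

    <application android:usesCleartextTraffic="true">
        <!-- activities etc. -->
    </application>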

And things work, yay. I can send notifications, and they appear on the phone without the application even having to be running! But then, as stated in the documentation, I try to send them when the application is running. And they won’t work. So I have to manually construct the notification (though, as stated in the documentation), and hope that it matches the “built-in” one in appearance.
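The manual construction goes roughly like this (a sketch; the channel id and icon name are made up, and this would live in the OnMessageReceived of the FirebaseMessagingService subclass):

    var notification = new NotificationCompat.Builder(this, "default_channel")
        .SetContentTitle(message.GetNotification().Title) // assuming a notification payload
        .SetContentText(message.GetNotification().Body)
        .SetSmallIcon(Resource.Drawable.ic_notification)  // with an alpha channel!
        .SetAutoCancel(true)
        .Build();
    NotificationManagerCompat.From(this).Notify(0, notification);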

And then the documentation starts to fall apart. Starting with just minor things, like not stating that icons should have an alpha channel, or otherwise they appear as blank squares (the docs were written for older Android). Then larger things: with the otherwise very clear testing instructions, there are suddenly no instructions for testing those foreground notifications. Well, that is because they won’t work if you just follow the documentation. And by now I’ve already forgotten what I had to do to somewhat fix them.

And testing out the code creating the notification takes a few more iterations than I’d be comfortable with. Building, deploying and running the application actually takes quite a bit of time, even when I have the relevant acceleration settings enabled. That is, using the shared runtime and only deploying changed modules of the application. It also bugs me that the shortcut for the application keeps occasionally disappearing from my launcher. Then I finally look into this more, and find out that Xamarin, at least on Visual Studio 2019, doesn’t honour those settings, and just uninstalls and installs the application every fucking time.

But oh, it gets better. As I try to test both background and foreground notifications, I have to occasionally close the app so that it is no longer running. I do this by pressing the “Close all” button on my Samsung phone. And the notifications won’t work. Little did I know that there was some OEM fuckery at work. Apparently closing the application this way is the same as killing it (at least as far as OS state-keeping is concerned). This sets a special flag indicating that the app was killed, and the OS-level Firebase service won’t deliver messages to applications that have been killed. Fuck this shit.

After I’ve regained my composure I move to the next feature on the list. Showing an image attached to the notification. The documentation states that this is as simple as setting the image URL in the notification payload to a valid HTTPS image. Firebase console even shows a preview of the image!

The documentation was wrong. I spend one full day trying to get the image to work, and it still doesn’t. I tried doing it in so many different ways: via both generic and Android-specific settings, via the old and the new Firebase API, via managed code and by hand, via my server or directly, or via the Firebase console. Nothing works. I’m starting to suspect that it might be Xamarin’s fault at this point. I couldn’t really find any documentation confirming or denying it. I couldn’t even find any documentation on how the Firebase library is supposed to work at the application level on plain Android. Is it really operating-system level, or is it just some code included by the library in the application, so that it really is the application showing the notification even when it wasn’t running? And maybe with Xamarin no one bothered to implement the code for showing the image. Maybe I’ll never know. I could create a new Android Java application, but I really can’t be bothered… Thanks to C#, I’ve started to hate the clunky syntax of Java and the other tooling more and more.

So many unnecessary hardships just trying to get even a basic application working… Next is the fun of trying to keep the device’s Firebase token in sync with the server. Apparently, it can occasionally change, and must then be sent to the server again. But what if the network doesn’t work right then? I’ll have to build the scaffolding to manually schedule background work and keep it retrying until it succeeds.

And then finally I get to implementing more server-side stuff, like removing the device token from the server should the application get uninstalled. Or delivery notifications! Those would be where things start to really get useful! Maybe I can also start to implement notification storage on the device, so that I can filter notifications by their read status, and not show notifications that were resent due to network errors. And then maybe implement some kind of caching and stuff, and data messages. Then I’ll probably have to implement downloading and showing attached images myself, but on the other hand, then it should finally start working. And then I’d have a nice application with a GUI list showing both all the past notifications and the unread ones. And then the general server-led notification system can be expanded to span other devices, too. Like lightbulbs! And then add some service health notifications from the other services I’ve been planning or implementing. And general ones too, like package or price tracking.

* * *

Package tracking was actually the reason I finally looked into this topic. Matkahuolto didn’t have notifications when new info appeared in tracking, like some other companies do. So, I made them myself. And despite all those hardships, I actually got the notifications and tracking working quite OK. Then the funniest thing happened: Matkahuolto updated their site like an hour after I got everything working :’D I had to make a quick fix to the script. Luckily things got easier, as I didn’t have to parse HTML anymore, and could get JSON instead. Good thing I included error handling in the script: if the crawl failed 10 times in a row, it would send an error notification. And now I have the LG OLED65E9 TV I ordered :3 Perhaps more on the HDMI 2.1 4k120 fun of the TV later!

Like I mentioned earlier, I’m looking to expand into using serverless, so that I won’t have to use more error-prone cronjobs and such. I’ve also never run C# code via cron, and everything there is still in Python. So serverless would enable me to write these in C#. But there seems to be a partial solution for this problem in the form of LinqPad. I’ve been longing for a serverless platform that made code as easy to execute as LinqPad. So why not just use it? I even have a Windows PC constantly running. Of course there is the problem of scripts still maybe failing randomly without any automatic restart, but there exists a partial solution in the form of monitoring. I’ve been building some general-purpose service and service discovery code on top of NATS, and maybe that solution could be used for this, too. A LinqPad script could register itself to that service discovery / health checking system, and then I’d get notifications if the script failed in some way! Maybe more of that, too, later. Later, as always. Well, got to have plans? An endless list of things to do, so that I can never feel satisfied from having done everything.

My own private Nuget feed (featuring packages for simpler web APIs): the brave new world might really be a thing?


Wow, what an unexpectedly productive end-of-the-year this is becoming! A blog post, again! Let’s just hope that this is the new normal, and not just something preceding a less-than-stellar episode.

* * *

Like I’ve begun to outline in the earlier posts, I’ve been rethinking my “digital strategy”. It is a pretty big thing as a whole, and I’m not sure if I can even sufficiently outline all the aspects of both leading to it and executing it. But I shall continue trying. This time I’m focusing on sharing code. Jump to the final section if you want to skip the history lesson and get to the matter at hand.

As some of you might know, I started programming at a very early time, relatively speaking. I was self-taught, and at that time PHP was starting to be the hot stuff, whereas JavaScript was just barely supported by widely-used browsers. No one even imagined DevOps, because they were too busy implementing SQL injection holes in their server-side applications, which they manually FTP’d over, or developed directly in production, also known as “the server”. And I guess I was one of those guys.

Then I found Python and game programming. And as everyone should know, there really isn’t that much code sharing in game programming. At least if you are trying to explore things and produce things as fast as possible without too much planning (not that this doesn’t apply to everything else, too). Python was also good for writing some smaller one-off utilities, snippets and scripts for god-knows-what. It could also be adapted to replace some PHP scripts. But then I found that the concept of application servers exists, opening up the way for a completely new breed of web applications (with some persistence without a database, plus EventStream and WebSockets). There was so much to explore, again.

I was satisfied with Python for a very long time. Then I bumped into Unity3D, where the preferred language is C#. A language much faster than Python, in part due to strong typing. I wasn’t immediately a fan: it took a few years to affect my other programming projects. After that I finally acknowledged all these benefits, and for about 3-4 years now, I’ve been trying to (and eventually succeeding in) shifting from writing all my programs, scripts and snippets in Python to writing them in C#. This was also catalyzed by the fact that I worked on a multi-year C# project at work. And now I can’t live without strong typing and all the safety it brings. And auto-completion! The largest, and almost only, obstacle was related to the ease of writing smaller programs and scripts. With Python I could just create a new text file, write a few lines, and execute it. Whereas with C# this meant starting up Visual Studio, creating a whole new project, writing the code, compiling it, finding the executable, and then finally executing it. The effort just wasn’t justified for snippets and small programs. But for something larger, like reinventing Tracker once again, or for extract-transform-load tools, it began to make sense. (And games of course, but at this point they didn’t really happen anymore, as I was too focused on the server side.)

Then this all changed almost overnight when I found LinqPad. It provided a platform to write and execute fragments of C# code, without the need to even write them to disk (a bit like Python’s REPL). This obviously obliterated the obstacle (khihi) I was having, and opened a way to start writing even the tiniest of snippets in C#, while also allowing me to use the “My Extensions” feature of LinqPad together with the default-imported packages to share some code between all these programs. And of course I could just individually reference a DLL in each script.

* * *

Parallel to this is maybe also the fact that I was now starting to get so experienced that I had already “tried everything”, and what was left was to enjoy all this knowledge and apply it to solving problems :o

With Python it never really got to a point where I “had” to have larger amounts of shared code, or needed to package it in some way. There were a few lesser instances of it, but I was able to solve them in other ways. But now with C# things have continued to evolve, and the need for sharing code between projects, programs and snippets is becoming a reality.

As for why this is only happening now, there were multiple aspects at play. As I said, almost everything was always bleeding-edge, and there was no need to version things. In part this was due to the fact that there really wasn’t any actively maintained code running. Everything was constant rediscovery, and as such there really wasn’t any code to share. Rather, everything was constantly forked and improved upon without thinking about reuse too much. Good for velocity. Also, as programming really happened on only one computer, code could be shared by just linking other files. Code running elsewhere was either relatively rare, or in maintenance-only mode.

With age and experience this is now changing. Or, well… That, but also the fact that I’ve gotten more interested in smaller and more approachable online services - services which have the ability to create immediate value. A bit like in the old times! In order to make those services dependable, a more disciplined approach is needed. While Tracker will always have a special place in my heart, there seem to be better things to spend my time on. At the same time, another strong reason is that I find myself creating and executing code on multiple computers and platforms, making filesystem-based approaches unsustainable.

* * *

All this has finally pushed things to a point where I should really be looking into building some kind of infrastructure for sharing code between all these instances. For a time, I leveraged the ability of LinqPad to reference custom DLLs, and was quite happy building per-project utilities with it. But now my development efforts seem to be focused on one smaller slice of the spectrum, with multiple opportunities to unify code. This is especially important considering that I’m also looking to make it more viable to both create and execute code in multiple environments, be it my laptop, my work computer or a server. In my master’s thesis I researched using Docker Compose to simplify running code on servers. As indicated in the earlier blog post, I’m now trying to continue on that path and make that task even simpler by utilizing serverless computing. With both Docker and serverless it becomes somewhat impossible to easily and reliably consume private dependencies from just the filesystem.

Instead, these dependencies should be in some kind of centralized semi-public location. And a private Nuget repository is just the thing! As outlined in these new blog posts, my idea is to build some kind of small, easily consumable private ecosystem of core features and libraries, so that I can easily reference these shared features and not copy-paste code and files all over the place. I started the work of building this library a tiny tiny while ago, but quickly decided that I should go all in. I’m not saying it will be easy, but it should certainly be enlightening.

The first set of projects to make it to the repository are those related to building web APIs. While I’ve on many occasions planned on using tools like gRPC, sometimes something simpler is just the way to go. I’m not sure how thoroughly I’ve shared my hatred for REST, but I like something even simpler if going in that direction. This set of projects is just that: a common way to call HTTP APIs the way I want. Ultimately this comes down to one single thing: wrapping application exceptions behind a single HTTP status code and handling them from there. How else am I supposed to know if I’m calling the wrong fcking endpoint, or if the specific entity doesn’t happen to exist??! Although, as we all know, writing HTTP libraries is a path to suffering. But maybe one with a narrow focus can succeed?
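To sketch the server half of that idea (AppException and its Code are stand-ins for whatever the application actually uses; this would sit in the ASP.NET Core pipeline):

    app.Use(async (context, next) =>
    {
        try
        {
            await next();
        }
        catch (AppException ex) // hypothetical base type for application-level errors
        {
            // Every application error travels as one HTTP status code plus a
            // machine-readable error code; the client maps it back to an exception.
            context.Response.StatusCode = 500;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(
                System.Text.Json.JsonSerializer.Serialize(new { error = ex.Code, message = ex.Message }));
        }
    });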

Anyway, maybe my impostor syndrome gets something out of this. Or not. It seems that everyone else is releasing public packages and open source code. But I am not. But maybe, just maybe, this will eventually change that.

* * *

The task of actually setting up a Nuget repository ended up being easier than I expected. My first reaction was to use some kind of self-hosted server for it, as I had had much success with setting up servers for other pieces of infrastructure lately. Unfortunately, the official Nuget.Server package was a bit strange, and I wasn’t sure if it even ran without IIS. I also didn’t want to spend too much time searching for alternative open-source implementations. So, I decided to try the cloud this once! Microsoft has a free 2 GB version of Azure Artifacts for hosting not only Nuget packages, but also packages for Node.js and Python. I decided to test it, and migrate to a self-hosted one should I need to save on hosting costs in the future.

As I wanted a private Nuget feed, I first created a private project (or team?), and then set up a Personal Access Token so that I could automatically fetch and push packages with my credentials. To actually use the feed with the Nuget command-line tool, I ended up adjusting my local per-user NuGet.Config file. I think I also had to first install the Azure Artifacts credential provider using a PowerShell script. But anyway. This way I should now be able to use the packages in any new project, without having to explicitly reference the repository and access tokens: nuget sources add -Name DeaAzure -Source "https://pkgs.dev.azure.com/_snip_/index.json" -Username anything -Password PAT_HERE. I had to manually add the feed to LinqPad, but that was just as easy: F4 -> Add NuGet… -> Settings.

For actually creating the packages, I just used Google and followed the first (or so) result. This means I wrote nuspec files for the target projects, built them manually and then used nuget pack and nuget push. I also wrote a script automating this. But after I later tried doing the same with a newer version of the nuget CLI, I got some warnings. It seems that it is now possible to specify the package details directly via csproj files. Apparently this should also simplify things a bit: now I manually add the generated DLL to the package, whereas the newer way might do it automatically. It did feel a bit strange to have to manually know how to include the DLLs in the correct directory in the nupkg. I’ll try to blog about the updated version of this process in more detail later (edit: here). That is, after I get around to implementing it.

While I was relatively hopeful about this whole thing above (though the last paragraph kinda changed that trend), I’m actually not sure what kind of things I’ll have to do when I want to use the feed with dockerized builds. I might have to bundle the config into each repository. Remains to be seen. It will also be interesting to see how my workflow adapts to both using and releasing these new Nuget packages. How about debugging and debug symbols? Will there be build automation and tests? How is versioning and version compatibility handled? Automatic updates and update notifications? Telemetry? For a moment everything seemed like it was too easy; I’m glad it didn’t actually end that way! /s

Edit: it seems that at least the debug symbols of my own packages work directly with LINQPad when using Azure Artifacts. For some reason I did have to manually remove the older versions of some of my packages from LINQPad's on-disk cache in order to use the symbols included in the newer packages (the older ones didn't include symbols, yet the newer ones had different version numbers). But after that they worked :)