Wow, what an
unexpectedly productive end-of-the-year this is becoming! A blog post, again!
Let’s just hope that this is the new normal, and not just something preceding a
less-than-stellar episode.
* * *
As I’ve begun to outline in the earlier posts, I’ve been rethinking my “digital strategy”. It is a pretty big thing as a whole, and I’m not sure I can even sufficiently cover all the aspects both leading to it and executing it. But I shall keep on trying.
This time I’m focusing on sharing code. Jump to the final section if you want
to skip the history lesson and get to the matter at hand.
As some of you might know, I started
programming at a very early time, relatively speaking. I was self-taught, and at that time PHP was starting to be the hot stuff, whereas JavaScript was just barely supported by widely-used browsers. No-one even imagined DevOps, because they were too busy implementing SQL-injection holes into their server-side
applications which they manually FTP’d over, or developed directly in
production, also known as “the server”. And I guess I was one of those guys.
Then I found Python and game programming.
And as everyone should know, there really isn’t that much code sharing in game
programming. At least not if you are trying to explore and produce things as fast as possible without too much planning (not that this doesn’t apply to everything else, too). Python was also good for writing some smaller one-off utilities, snippets and scripts for god-knows-what. It could also be adapted to replace some PHP scripts. But then I found that the concept of application servers exists, opening up the way for a completely new breed of web applications (with some persistence without a database, plus EventStream and WebSockets). There was so much to explore, again.
I was satisfied with Python for a very long
time. Then I bumped into Unity3D, where the preferred language is C#. A language much faster than Python, in part due to strong typing. I wasn’t immediately a fan: it took a few years for it to affect my other programming projects. After that I finally acknowledged all these benefits, and for about 3-4 years now, I’ve been trying (and eventually succeeding) to shift from writing all my programs, scripts and snippets in Python to writing them in C#. This was also catalyzed by the fact that I worked on a multi-year C# project at work. And now
I can’t live without strong typing and all the safety it brings. And
auto-completion! The largest, and almost only, obstacle was related to the ease
of writing smaller programs and scripts. With Python I could just create a new
text file, write a few lines, and execute it. Whereas with C# this meant
starting up Visual Studio, creating a whole new project, writing the code,
compiling it, finding the executable, and then finally executing it. The effort
just wasn’t justified for snippets and small programs. But for something
larger, like reinventing Tracker once again, or for extract-transform-load
tools, it began to make sense. (And games of course, but at this time they
didn’t really happen anymore, as I was too focused on server-side.)
Then this all changed almost overnight when
I found LinqPad. It provided a platform to write and execute fragments of C#
code, without the need to even write them to disk (a bit like with Python’s
REPL). This obviously obliterated the obstacle (khihi) I was having, and opened the way to writing even the tiniest of snippets in C#, while also
allowing me to use the “My Extensions” feature of LinqPad together with the default-imported packages to share some code between all these programs. And of course I could just individually reference a DLL in each script.
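To give a concrete idea of what this looks like in practice (a made-up example, not one of my actual snippets): an extension method defined once in the “My Extensions” query becomes available to every other query, and anything can be inspected with LINQPad’s Dump().

```csharp
// In the "My Extensions" query: members of this class are visible to all other queries.
public static class MyExtensions
{
    public static string Slugify(this string value) =>
        value.Trim().ToLowerInvariant().Replace(' ', '-');
}
```

```csharp
// In any other query (e.g. "C# Statements" mode): no project, no build step, just run it.
"Hello NuGet World".Slugify().Dump(); // -> "hello-nuget-world"
```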
* * *
Parallel to this is maybe also the fact that I was now starting to get so experienced that I had already “tried everything”, and what was left was to enjoy all this knowledge and apply it to solving problems :o
With Python it never really got to a point
where I “had” to have larger amounts of shared code or needed to package it in some way. There were a few lesser instances of this, but I was able to solve them in other ways. But now with C# things have continued to evolve, and the need for
sharing code between projects, programs and snippets is becoming a reality.
As for why this is only happening now, there were multiple aspects at play. As I said, almost everything was always bleeding-edge, and there was no need to version things. In part this was due to the fact that there really wasn’t any actively maintained code running anywhere. Everything was constant rediscovery, and as such there really wasn’t
any code to share. Rather, everything was constantly forked and improved upon
without thinking about reuse too much. Good for velocity. Also, as programming
really happened on only one computer, code could be shared by just linking
other files. Code running elsewhere was either relatively rare, or in
maintenance-only mode.
With age and experience this is now
changing. Or, well… That, but also the fact that I’ve gotten more interested in
smaller and more approachable online services - services which have the ability
to create immediate value. A bit like in the old times! In order to make those services dependable, a more disciplined approach is needed. While Tracker will
always have a special place in my heart, there seem to be better things to spend my time on. At the same time, another strong reason is that I find myself creating and executing code on
multiple computers and platforms, making filesystem-based approaches
unsustainable.
* * *
All this has finally pushed things
to a point where I should really be looking into building some kind of
infrastructure to share code between all these instances. For a time, I
leveraged the ability of LinqPad to reference custom DLLs and was quite happy building per-project utilities with it. But now my development efforts all seem to be focused on one smaller slice of the spectrum, with multiple opportunities to unify code. This is especially important considering the fact that I’m also looking to make it more viable to both create and execute code in multiple environments, be it my laptop, work computer or a server. In my master’s thesis I researched using Docker Compose to simplify running code on
servers. As indicated in the earlier blog post, I’m now trying to continue on
that path and make that task even simpler by utilizing serverless computing.
With both Docker and serverless it becomes somewhat impossible to easily and
reliably consume private dependencies from just the filesystem.
Instead, these dependencies should be in
some kind of centralized semi-public location. And a private Nuget-repository
is just the thing! As outlined in these new blog posts, my idea is to build
some kind of small, easily consumable private ecosystem of core features and
libraries so that I can easily reference these shared features and not
copy-paste code and files all over the place. I started the work of building
this library a tiny tiny while ago, but quickly decided that I should go in all
the way. I’m not saying it will be easy, but it should certainly be
enlightening.
The first set of projects to make it to the
repository are those related to building web APIs. While I’ve on many occasions
planned on using tools like gRPC, sometimes something simpler just is the way
to go. I’m not sure how thoroughly I’ve shared my hatred for REST, but I prefer something even simpler if going in that direction. This set of projects is just
that. A common way to call HTTP APIs the way I want. Ultimately this comes down
to one single thing: wrapping application exceptions behind a single HTTP
status code and handling them from there. How else am I supposed to know if I’m
calling the wrong fcking endpoint, or if the specific entity doesn’t happen to exist??! Although, as we all know, writing HTTP libraries is a path to suffering. But maybe one with a narrow focus can succeed?
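To sketch the core idea (purely an illustrative sketch in ASP.NET Core terms, not the actual contents of these projects; AppException and the chosen status code are made up for the example): the server funnels every “expected” application exception into one agreed-upon status code with the exception details in the body, so that an application-level “entity not found” no longer looks the same as a plain routing 404.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical base class for application-level errors (e.g. an EntityNotFoundException).
public class AppException : Exception
{
    public AppException(string message) : base(message) { }
}

public class AppExceptionMiddleware
{
    private readonly RequestDelegate _next;
    public AppExceptionMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (AppException ex)
        {
            // One fixed status code for every application error; 500 here purely as an example.
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            context.Response.ContentType = "application/json";
            await context.Response.WriteAsync(
                JsonSerializer.Serialize(new { Error = ex.GetType().Name, ex.Message }));
        }
    }
}
```

On the client side a matching wrapper can then check for that status code, deserialize the body and re-throw a typed exception, which is exactly the “handling them from there” part.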
Anyway, maybe my impostor syndrome gets something
out of this. Or not. It seems that everyone else is releasing public packages
and open source code. But I am not. But maybe, just maybe, this will eventually
change that thing.
* * *
The task of actually setting up a Nuget
repository ended up being easier than I would have thought. As a first reaction I thought about using some kind of self-hosted server for it, as I had had much success with setting up servers for other pieces of infrastructure lately. Unfortunately, the official Nuget.Server package was a bit strange, and I
wasn’t sure if it even ran without IIS. I also didn’t want to spend too much
time searching for alternative open-source implementations. So, I decided to
try the cloud this once! Microsoft has a free 2 GB version of Azure Artifacts
for hosting not only Nuget-packages, but also packages for Node.js and Python.
I decided to test it, and then migrate to a self-hosted one should I need to
save on hosting costs in the future.
As I wanted a private Nuget feed, I first
created a private project (or team?), and then set up Personal
Access Tokens so that I could automatically fetch and push packages with my
credentials. To actually use the feed with the Nuget command-line tool, I ended
up adjusting
my local per-user NuGet.Config file. I think that I also had to first install the Azure Artifacts credential provider using a PowerShell script. But anyway. This way I should now be able to use the
packages in any new project, without having to explicitly reference the
repository and access tokens: nuget sources add -Name DeaAzure -Source
"https://pkgs.dev.azure.com/_snip_/index.json" -Username anything -Password
PAT_HERE. I had to manually add the feed to LinqPad, but that was just as easy:
F4 -> Add NuGet… -> Settings.
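For reference, the resulting entry in the per-user NuGet.Config looks roughly like the following (the organization path is snipped just like above, and the credentials section is only needed if the credential provider isn’t handling authentication for you):

```xml
<configuration>
  <packageSources>
    <add key="DeaAzure" value="https://pkgs.dev.azure.com/_snip_/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <DeaAzure>
      <add key="Username" value="anything" />
      <add key="ClearTextPassword" value="PAT_HERE" />
    </DeaAzure>
  </packageSourceCredentials>
</configuration>
```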
For actually creating the packages I just
used Google and followed the first (or so) result. This means I wrote nuspec files
for the target projects, built them manually and then used nuget pack and nuget
push. I also wrote a script automating this. But, after I later tried doing the
same with a newer version of the nuget CLI I got some warnings. It seems that it
is now possible to specify the package details directly via csproj-files. Apparently
this should also simplify things a bit: currently I manually add the generated DLL to the package, whereas the newer way might do it automatically. It did feel a bit strange to have to manually figure out how to include the DLLs in the correct directory of the nupkg. I’ll try to blog about
the updated version of this process in more detail later (edit: here). That is, after I get to implementing it.
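As a rough sketch of what that newer approach looks like (these are the standard MSBuild pack properties; the names and values below are made up for illustration): the package metadata lives directly in the csproj, and packing then builds the project and places the DLL into the nupkg automatically.

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <PackageId>My.Shared.Http</PackageId>
    <Version>0.1.0</Version>
    <Authors>me</Authors>
    <!-- Produce the .nupkg on every build, with the built DLL placed under lib/ automatically. -->
    <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>
```

After that, something like dotnet pack -c Release followed by nuget push (or dotnet nuget push) against the feed should replace the hand-written nuspec and the manual DLL wrangling.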
While I was relatively hopeful about this whole
thing above (though the last paragraph kinda changed that trend), I’m actually not
sure what kind of things I’ll have to do when I want to use the feed with
dockerized builds. I might have to bundle the config to each repository.
Remains to be seen. It will also be interesting to see how my workflow adapts
to both using and releasing these new Nuget-packages. How about
debugging and debug symbols? Will there be build automation and tests? How is
versioning and version compatibility handled? Automatic updates and update
notifications? Telemetry? For a moment everything seemed like it was too easy,
I’m glad it didn’t actually end that way! /s
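If I do end up bundling the config, one plausible shape for it (emphasis on plausible; I haven’t tried this yet, and the image tag, file names and build-argument name below are made up for illustration) would be a NuGet.Config with a placeholder PAT committed to each repository, with the real token substituted in at build time:

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
# The PAT is passed in with: docker build --build-arg FEED_PAT=... .
ARG FEED_PAT
COPY NuGet.Config MyApp.csproj ./
# Replace the placeholder in the committed config with the real token, then restore.
RUN sed -i "s|PAT_PLACEHOLDER|${FEED_PAT}|" NuGet.Config \
 && dotnet restore --configfile NuGet.Config
COPY . .
RUN dotnet publish -c Release -o /app --no-restore
```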
Edit: it seems that at least the debug symbols of my own packages work directly with LINQPad when using Azure Artifacts. For some reason I did have to manually remove the older versions of some of my packages from LINQPad’s on-disk cache in order to use the symbols I included in the newer packages (the older ones didn’t include symbols, even though the newer ones had a different version number). But after that they worked :)