Ramblings of Vazde: Overengineering. Complex thought processes. Self-discovery. By Vazde.

2022 in review: IoT, ray tracing and web (2022-12-28)

<p style="text-align: left;"><b>Another year</b> has come to pass, and it’s time for some more reflection! Despite its many flaws, the one (only? :s) way this year has been successful is through all the code I’ve managed to write. So without delving more into all the melancholy, let’s jump right into what really matters in life and look back at all the great projects I’ve had:</p><p style="text-align: left;">I started the year with embedded development in C#, then did two intensive months of game development - starting by learning Vulkan and ray tracing from scratch - then learned Kubernetes and Terraform and a bit of cloud, worked on my homepage, and finally did a bit more embedded stuff by writing some programs to monitor and control factories in Minecraft.</p><h2 style="text-align: left;">Embedded .NET with Meadow</h2><p style="text-align: left;">For the first half of the year I got back to embedded development and managed to drastically improve the UX and reliability of <a href="https://blog.dea.fi/2022/04/adventures-in-making-net-iot-timer-with-meadow.html">setting short timers</a>. This is all thanks to the Meadow platform having reached much better maturity than it had a year or two earlier, when I first started working on it. I even wrote <a href="https://blog.dea.fi/2022/04/adventures-in-making-net-iot-timer-with-meadow.html">a post</a> about the adventures I had when designing and implementing that system. There’s also a short YouTube clip illustrating how it’s all used in my daily life.
At that point I was feeling really good having “shipped” something useful, and moved on to the next topic. Before this (for many, many years) I had used the same launcher, but the countdowns were opened in a browser window.</p><div><p style="text-align: left;">Plus, over time I’ve gotten really used to the system, and thought that I couldn’t ever live without it. This assumption was, however, challenged a month or two ago, when I upgraded the firmware on the device running the display hardware, instead of the other device I was supposed to develop serial port things on. But what the hell. Let’s just upgrade it to the new OS and get all the promised stability and performance upgrades while at it. Maybe those would even eliminate the very rare restarts when a watchdog timer set by me times out.</p><p style="text-align: left;">One of the big new features of that new release was the inclusion of a new linker, which is able to trim out unused code, greatly reducing both the code size and the startup time. Unfortunately, this specific linker isn’t implemented the same way as the one by Microsoft in the full .NET SDK, and there seem to be no working developer-facing options to alter its behaviour yet - it's all beta. The way it’s currently implemented <a href="https://github.com/WildernessLabs/Meadow_Issues/issues/245">strips out</a> code needed by reflection-based serialization, leaving me stranded with my API calls.</p><p style="text-align: left;">One might think that I could always go back to the old OS, but there are two obstacles. The first is the tooling: it is still under active development, and there’s no apparent way to install a previous version. Though I highly doubt downgrading has actually been blocked yet, and there’s a responsive Slack channel to ask about these things. But the second one is quite embarrassing.
The last time I worked on the codebase was when I moved the server-side stuff to my Kubernetes server (more on that later), and it turned out to be a more involved operation than originally foreseen.</p></div><div><p style="text-align: left;">When I finally got it to work, I was rather frustrated, and seemed to have <i><b>forgotten</b></i> to make the final commit - but there’s really no excuse for this kind of irresponsible behaviour. So after I had done the OS update and the code updates to go with it, I no longer had a working version to go back to. Rather than taking the time to fix things, I’ll just have to live with my mistakes until the new OS is in a serviceable state. At least I’ve had plenty of things to keep me busy and not thinking about it. Quite the theme overall…</p><h2 style="text-align: left;">Vulkan and ray tracing</h2><p style="text-align: left;">The timing for my next expedition couldn’t have been better. After I had neatly finished the work on the timer (but before the OS update), there was a bit of time to relax. How unheard of. When it was time for my summer holiday, I was already eager to try something new. For a long time I’d been meaning to learn Vulkan, and now I also had the spirit and the time for it. And not just Vulkan, if things progressed well.</p><p style="text-align: left;">I think it took me about a week to go through most of the <a href="https://vulkan-tutorial.com/">tutorial</a>. I had originally anticipated it would take the whole month, so my spirit was further elevated by that triumph! <a href="https://blog.dea.fi/2022/07/ubg-vulkan-and-hardware-ray-tracing.html">As detailed</a> in the relevant post, one of my long-lived dreams has been to build a game with realtime soft shadows. I’d always thought the math for it was beyond my abilities - except now. Armed with Vulkan and powerful hardware, perhaps I could just brute force my way to it.</p><p style="text-align: left;">And it all started rather promisingly.
I learned more than I could have hoped for, and even got good results on the screen, at interactive frame rates. But the quality fell short of what I’d hoped for. And with that, so fell my motivation to work on it. Despite this, I still managed to achieve some rather great things after the initial blog post, namely multiple importance sampling (MIS) and the integration of Nvidia Realtime Denoisers (NRD). Both improved the quality and performance by at least an order of magnitude or two. Almost enough. Almost.</p><p style="text-align: left;">It sucks that I haven’t documented my journey better than I did, as this is the most impressive thing I’ve achieved in a long time. There are <i>some</i> <a href="https://twitter.com/routaverkko/status/1568921549977096193/photo/1">nice screenshots</a> on my Twitter page, and some short compression-butchered <a href="https://www.youtube.com/watch?v=7MdXSyMpVDQ">video segments</a> on my YouTube page, but that’s it.</p><p style="text-align: left;">But the reality of it is that the intensive ~two months I worked on the project really burned me out for a long while, despite all the impressive things I achieved. I really enjoyed what I did, but everything has its limits. Though truth be told, RT is <i>perhaps</i> something that I don’t really wish to pursue further. It was very rewarding, but as a concept it’s not really something that interests me that much - I was really only after the results. I thought that simple 2D shadows would be easy to implement with the new technology, but the truth is that they are actually even more complex than 3D ones, at least when implemented the way I approached them. Sunk costs kept me engaged.</p><p style="text-align: left;">But when it comes to using Vulkan as a graphics API, I really did enjoy it a lot.
It’s so much nicer than OpenGL, as it’s free of all the technical debt of immediate mode rendering and old hardware pipelines, while explicitly supporting all the new things that make development easier and the code more performant, such as pre-compiled shaders, cached pipelines, and a multithreaded API design. Though it’s still not easy to use optimally by any means - especially when it comes to supporting heterogeneous hardware. I already liked OpenGL, and I like Vulkan even more.</p><p style="text-align: left;">A few years back I made <a href="https://twitter.com/routaverkko/status/1456221531428823041">yet another</a> version <a href="https://youtu.be/6ybSwYOb85o">of my</a> unicorn game project - codenamed Unnamed Spacefaring Game: Energized (with modern OpenGL and C#) - and I thought that the code I wrote for it was diamond-grade. Not quite. Or maybe it was, compared to how the previous iterations were implemented on top of a decade+ old Python codebase. I also had <a href="https://www.youtube.com/watch?v=K2UDjoUNE_U">big plans</a> for writing the engine side in such a way that it would be able to run efficiently on many-core systems. This wasn’t easy, and it was further complicated by the APIs offered by OpenGL. The solution I arrived at was ambitious, and I feel really bad I never got far enough to really test it in battle.</p><p style="text-align: left;">Vulkan on the other hand has native support for multiple frames in flight and multiple descriptor sets. These features could be used to <i>drastically</i> simplify the resulting architecture, perhaps even completely eliminating the need for all those ambitious inventions I had. I’m very conflicted on how to proceed. Everything sounds almost too simple.
I guess there’s still some demand for building infrastructure for better multithreading, but the big picture has shifted so much that the old plans no longer apply, and I’d have to spend more time planning against that new future to be able to say anything conclusive.</p><p style="text-align: left;">But what I do know is that the code I wrote for that RT Unnamed Building Game was a lot better than that of USGE. I’m really hoping to get back to UBG someday, as its scope is also a lot smaller than USGE’s; if I’d be able to keep the idea alive, yet yeet the RT things, it would be something that could realistically be shipped, and perhaps sold. Though, without RT most of the novelty and graphical appeal wears off…</p><p style="text-align: left;">I guess it’s all about managing scope; it would also be possible to ship a version of USGE with only a few months of development. It would be a technology demo, but a fully playable game nonetheless. But there’s been enough gamedev for now.</p><h2 style="text-align: left;">Homelab and custom auth</h2><p style="text-align: left;">As told above, I had used up my summer holiday (<i><a href="https://www.instagram.com/p/CGJ-fzAHDH3/">almost</a> every waking moment</i> of it, I might add) for learning Vulkan and RT, and there was still a lot to do. Meanwhile at work the then-current project had evaporated over the holiday, and for a moment I was left without one. This gave me the energy to continue learning, but still I faced that eventual burnout. This “opportunity” meant that I could pivot my focus both at work and in my own time to learning about Kubernetes, and forget about all the unconquerable things in ray tracing.</p><p style="text-align: left;">Plus, yet another of my long-time dreams had been to have a solid infrastructure for running self-hosted software - both of my own making, and by other people. And doing it securely.
So I didn’t really mind the literal double-timing of learning new things so soon again.</p><p style="text-align: left;">And a lot of new things I learned. It took a bit of iteration, but now I have a somewhat satisfactory hierarchy of Kubernetes manifests, and a small-yet-equally-satisfactory collection of services described by those manifests. All hosted on an ebay’d <a href="https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/">tinyminimicro</a>-sized server I bought just for this purpose. The server's awesome: it's tiny, x64, and consumes about 6 watts at idle. Pity it has only 8 GB of RAM, and doesn't have the PCI-E connector for a 10+ Gbit network card soldered in. Also, I’d still like to do the manifests over one more time - this time using Terraform to avoid some repetition in specific parts of the infrastructure. A bit more on that in the next section.</p><p style="text-align: left;">I had hoped to create a VLOG episode about all things homelab, but that didn’t happen this year. What I instead have is a <a href="https://www.youtube.com/watch?v=RgNFwJUKNMs">two</a>-<a href="https://www.youtube.com/watch?v=t1e72YacwTQ">part</a> special about one facet of it - namely authentication and authorization. I’ve materialized a lot of my dreams this year, and the theme continues :)</p><p style="text-align: left;">I set up a monitoring stack with Postgres, Grafana and Prometheus, and migrated some of my own server applications to the new hardware (countdowns, hometab, …). I also set up an instance of <a href="https://github.com/dgtlmoon/changedetection.io">changedetection.io</a>. All of these things benefit from unified access control in the form of single sign-on (SSO). As illustrated in the first part of the VLOG, I had initially set up Authelia for this task, but it was not to my satisfaction.</p><h3 style="text-align: left;">Moonforge</h3><p style="text-align: left;">So I did what I do best, and made my own.
While this is a bold move even by my standards, I kept things realistic by having Keycloak do the actual user authentication, and having my service continue from there. It’s called Moonforge. All it does is mint and validate JWTs based on a configurable policy. There’s more information about this in the second part of the VLOG.</p><p style="text-align: left;">But in short, Authelia felt really nasty because it had a single login cookie for everything. With Moonforge, the user logs in to Keycloak, and then to Moonforge via OIDC, and afterwards to the actual applications with OIDC-like semantics, even if the applications don’t natively support it, thanks to special reverse-proxy routes covering certain paths in each application’s own domain. The JWT cookies for these logins are application-specific, and can’t be used to access other applications unless the user specifically allows it on a case-by-case basis.</p><p style="text-align: left;">I really think that having my own service for auth is for the best - as long as I haven't made a mistake in something elementary. As further detailed in the VLOG, the alternatives are just not that appealing. Authelia requires me to fully trust the applications it protects, and standalone Keycloak is too complex to configure: I’m bound to make a mistake. If I build my own, I know for sure how to configure it. And since I’ve been doing backend stuff for a long time, I hope for everyone’s sake that I’ve learned to build a simple one that is reasonably secure. After all, the harder parts are still done by Keycloak - for now. And I’ve already spent enough time looking for ready-made solutions.</p><p style="text-align: left;">Besides, having an <u>auth(n|z)</u> solution of my own has been yet another long-time aspiration. I think I did some initial UI planning as far back as 2015, and it took until now to implement a system.
It looks nothing like I originally envisioned, but that is partly due to how the focus so far has been on authz instead of authn. Plus, I haven’t really decided whether I want to let it have its own visual identity, or whether to have a more standardized appearance around my core CSS.</p><p style="text-align: left;">While the visual identity is a definite nice-to-have, the actual technology behind the scenes is what it’s really all about. In its current form it replaces Authelia with something more secure, without offering much more than that. The foundation I’ve built everything on feels really solid and well grounded. The cookie paths and domain are sensible, and there’s been real effort in building everything with a focus on security. There’s extra care in JWT validation, and things like embedded keys and alternative signature algorithms have been explicitly disabled - strange how some JWT libraries don’t even have the option to disable these features. And the small scope has definitely helped.</p><p style="text-align: left;">As I’ve mentioned many times now, for a project of this scope, having Keycloak be part of it is sensible. But I’m hoping to find the time to replace it with my own implementation. When I was still working on Tracker, I implemented a login flow similar to how, for example, WhatsApp Web works now. That type of passwordless flow was really nice, and I’ve wanted to replicate it in a more generic setting, while also adding some other security features. Hence the dream. In the VLOG I talk about all the things I’d like to add and how they’d strengthen the security of all the services relying on the system - pushing more and more of the auth functionality onto better-trusted companion devices, outside the device and the server the login is being performed on. There are also plans to harden the surface that mints the JWTs by requiring input from a trusted device.
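</p><p style="text-align: left;">To make the hardening described above more concrete - pinning the signature algorithm, rejecting embedded key material, and binding each token to a single application - here’s a rough stdlib-only Python sketch. This is not Moonforge’s actual code; all names and the claim layout are hypothetical illustrations of the idea:</p>

```python
import base64, hashlib, hmac, json, time

def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def mint_jwt(claims: dict, key: bytes) -> str:
    """Mint an HS256 JWT (hypothetical sketch, not a full JOSE implementation)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url_encode(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url_encode(sig)}"

def validate_jwt(token: str, key: bytes, audience: str) -> dict:
    head_b64, claims_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(head_b64))
    # Pin the algorithm: never let the token choose how it is verified.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected signature algorithm")
    # Reject embedded key material (jwk/jku/x5u/x5c header tricks).
    if any(k in header for k in ("jwk", "jku", "x5u", "x5c")):
        raise ValueError("embedded keys are not allowed")
    expected = hmac.new(key, f"{head_b64}.{claims_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(claims_b64))
    # Application-specific cookies: a token minted for app A must not pass for app B.
    if claims.get("aud") != audience:
        raise ValueError("token minted for a different application")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

The audience check is what makes the per-application cookies meaningful: even with a valid signature, a token scoped to one service is rejected everywhere else.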
I’d also like to get hands-on experience interfacing with security keys such as Yubikeys.</p><p style="text-align: left;">Moonforge is far from being feature-complete, but it’s perfectly serviceable for its current task. I feel good, and with that 'final part of the 1st stage' of my Kubernetes infrastructure implemented, I now have what I wanted: a stable foundation for hosting and building self-hosted services and applications. Next up is Terraform, better uptime monitoring, perhaps distributed logging and, most importantly, improved machine-to-machine authentication. And maybe even making more VLOGs on the topic.</p><h2 style="text-align: left;">Terraform, cloud and static HTML generation</h2><p style="text-align: left;">With the things at home looking so good now, why stop there? The next thing I could be improving is my (still perfectly functioning) decade+ old <a href="https://dea.fi">homepage</a>. The current site is hosted on a single server, and the PHP in it dynamically serves WikiCreole-based content. I’ve been meaning to replace it for the past two years with static generation and geo-replicated hosting, but the time hasn’t been ripe. Except now*.</p><p style="text-align: left;">There’s been demand at work to learn more about the cloud, and I was able to use this to my advantage. I was required to set up a static website on AWS S3 and CloudFront, so why not use what I learned for the <a href="https://uusi.dea.fi/">new site</a>? And while at it, why not do it <i>properly</i>. This is again a topic that would have made a great VLOG episode or a post of its own, but I’ll <i>try</i> to be brief.</p><p style="text-align: left;">At work I was given Terraform to accomplish the task, and I quickly fell in love with it. It allows for writing all the infrastructure as “code”, a bit similar to how Kubernetes manifests work; precise and to some degree self-documenting.
And making changes is easy, as the user doesn’t have to know the exact commands; they only need to describe what they want the end result to look like. It’s a really nice way to work.</p><p style="text-align: left;">The best advanced feature of Terraform is “modules”, which allow for defining complex reusable components that expand to a set of simpler pieces of infrastructure. While that’s overkill for just a single static website, I’m looking forward to migrating all(?) my Kubernetes manifests to Terraform. While raw manifests are mostly fine, the biggest pain point for me has been in defining firewall rules and HTTP(S) ingresses. These are very verbose, and contain a lot of duplication between each other. With Terraform I should be able to write a single custom <u>utu_http_ingress</u> module, which defines all of those at once. There are also some special additions that need to be made to each Traefik (the reverse proxy) HTTP ingress route entry in order to support the special login paths needed by Moonforge - having the module transparently do this in a centrally defined location is awesome.</p><p style="text-align: left;">With the basic infra up, the more fun part was uploading the content. I could have used some type of shell script and <u>aws-cli</u> to upload the site content, but it turned out to be really easy to use the SDK and build my own program. With it I could easily incorporate some extra steps, such as only uploading the files that have been changed or renamed between runs, and more importantly transforming the uploaded filenames and paths. The static site generator includes the .html file extension on all content, and I was trivially able to remove that.
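</p><p style="text-align: left;">The change-detection and path-transformation steps boil down to a content-hash manifest that is diffed between runs. This is not my actual uploader - just a rough stdlib Python sketch of the idea, with a made-up manifest format and function names:</p>

```python
import hashlib, json
from pathlib import Path

def transform_key(rel_path: str) -> str:
    # The generator writes everything with a .html suffix;
    # drop it so the served URLs stay extensionless.
    return rel_path[:-5] if rel_path.endswith(".html") else rel_path

def changed_files(site_dir: Path, manifest_path: Path) -> dict:
    """Return {upload_key: local_path} for files new or changed since the last run."""
    try:
        previous = json.loads(manifest_path.read_text())
    except FileNotFoundError:
        previous = {}  # first run: everything counts as changed
    current, to_upload = {}, {}
    for f in sorted(site_dir.rglob("*")):
        if not f.is_file():
            continue
        rel = f.relative_to(site_dir).as_posix()
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        current[rel] = digest
        if previous.get(rel) != digest:
            to_upload[transform_key(rel)] = str(f)
    # Persist the hashes so the next run only uploads the delta.
    manifest_path.write_text(json.dumps(current, indent=2))
    return to_upload
```

The real program then hands each `{key: path}` pair to the SDK's upload call, which is also where per-file metadata (such as the compression headers discussed below) can be attached.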
There was also some special logic in how “index” pages and paths worked with the PHP site, and I was able to take that into account, too.</p><p style="text-align: left;">And lastly, I learned that CloudFront is able to compress the content it serves, but this feature depends on the file types and the available CPU time on each edge server. The only way I can ensure that the content is always served compressed is to save it pre-compressed to S3, and then return a Content-Encoding header with the correct value for each request. But now that I control the upload process, this was no problem :)</p><p style="text-align: left;">And speaking of content, that’s a topic for another post, again. A few years back I stumbled on <a href="https://www.statiq.dev/framework">statiq</a>, and found it to be the least-sucking tool in the .NET world, so that’s what I chose, and started to slowly port over the site itself, and the content. Instead of WikiCreole and PHP I had Markdown and C#-based HTML templates compiled down to plain HTML.</p><p style="text-align: left;">With this new technology stack I could also accomplish some of my long-time desires for ever so slightly improving content discoverability on the site. Along with the old stuff, the site also serves as the companion for my VLOG, containing the scripts, some technical specs and other commentary on each episode. I wanted to make it easier to move between different episodes and projects, so I implemented a <a href="https://uusi.dea.fi/vlog/s02/e05">sidebar navigator</a> for this.</p><img src="https://lh5.googleusercontent.com/2amz6D8cRTJ_BU7JMPNAY2ZppbD4Ph6P7SiO-0Fe-iVGgWwAXKX3SxNfr1fvgJzJG_nhppd0YLhTTW49J2OiPwp-5gPoj9jBTpf9fpZReZ5XDfILMMGFKt8iAco_OPqNTt_Xmfu5UwIQfKHZs6MfyzECoM0JuAbPcXB_kvQoxnR5vmSpHmi87fFkVUZGAQ=s16000" /><br /><p style="text-align: left;">Site generation in statiq is a multi-stage process, with one of the stages being content discovery and “front-matter” parsing.
This way each page knows what other pages exist, and I’m able to pull similar pages from the metadata into the sidebar - either by the folder structure, or by a special grouping key. This metadata is also used to build navigation breadcrumbs.<img src="https://lh4.googleusercontent.com/9j-kxByOeF_xNe5lnP8ht_kRRpNoJWIK63OsKBm9823BLLTePBA8bgy1RnaZF1GwKOTBmXlLvnP8pMY-J1IiI6H_uCDykfV7b0jE49JuQZbAVVybQ3CmIsCfYgJlh6XdKeA78YI88yddd6JjaSgbtkW99kAE8eLXBWnZC3kzvpN0cR0MZwMEpGk0Z_ur5g" /></p><p style="text-align: left;">*The tool still sucks, though. It tries so hard, but fails. There’s a live-reload server for avoiding full rebuilds when working on individual pages and templates, and a caching system so that full rebuilds can also be avoided when publishing. Unfortunately, both have been broken for several years. The live-reload server needs to be restarted whenever a page containing an error is opened (so don’t save a live-reload page until the code compiles!), and the caching system doesn’t purge the metadata cache, so pages appear multiple times in it, breaking the sidebar.</p><p style="text-align: left;">I was going to sidestep all that by making a hot-reload system of my own, building on the “hot” reload technology I invented for USGE, UBG, and UBG's asset pipeline. Newer dotnet supports unloadable assemblies once again, so it’s possible to keep the compiler and dependencies loaded in memory, but still dynamically reload all the user code - a sub-second operation. That way I could delete the cache between rounds, and avoid the server dying whenever an error occurs, by simply very quickly restarting it after each run instead. Unfortunately statiq always starts a text-based debug console meant for live reload even when not using it, and never cleans it up when exiting, thereby failing to run again in the same process.
The relatively complex code which does some high-level orchestration of the generator is rather tightly coupled with the code controlling the debug console, so it’s not an easy task to separate the two. Even cleaning up afterwards is not trivial. FML.</p><p style="text-align: left;">Despite this, the preview for the new site is <a href="https://uusi.dea.fi/main">live</a>, and the DX is a bit better than before. And it’ll get better, won’t it? And if not, I’ll just have to make my own generator. There’s still work to be done in upgrading <a href="https://uusi.dea.fi/projects">imagepaste</a>, perhaps the contact form, and transferring over some other static content. I’d also like to improve the page load latency, so I’ll have to find a way to cache the CSS file, yet reliably invalidate it without rebuilding the whole static site. That might be impossible to do without scripting. But for now, at least the primary content has been migrated successfully.</p><h2 style="text-align: left;">Minecraft and Lua</h2><p style="text-align: left;">It’s been a tight year, and I wanted to take my mind off other things, so I found myself gravitating towards modded Minecraft once again. Plus, it’s just so damn addictive it’s hard to keep away for too long.</p><p style="text-align: left;">This time I’ve been playing <a href="https://www.curseforge.com/minecraft/modpacks/create-above-and-beyond">Create: Above and Beyond</a>, a non-expert modpack with strong progression. The focus is on Create, a “low-tech” automation mod, whose large moving machinery in my opinion fits the theme of Minecraft really well. As the technology level keeps increasing over the playthrough, some products benefit from automating them with programmable in-game computers.
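</p><p style="text-align: left;">As a rough illustration of what the monitoring side of these in-game computers amounts to (sketched here in Python rather than the actual Lua, with all names and the line format invented), the gateway end is just parsing simple metric reports and batching them for the database:</p>

```python
import time
from dataclasses import dataclass

@dataclass
class Sample:
    metric: str
    value: float
    ts: float

def parse_report(line, now=None):
    """Parse a 'metric value [timestamp]' line, as an in-game computer might send."""
    parts = line.split()
    if len(parts) not in (2, 3):
        raise ValueError(f"malformed report: {line!r}")
    ts = float(parts[2]) if len(parts) == 3 else (now or time.time())
    return Sample(metric=parts[0], value=float(parts[1]), ts=ts)

class Gateway:
    """Buffers samples and flushes them in batches, e.g. into Postgres."""
    def __init__(self, flush_size=100):
        self.buffer = []
        self.flush_size = flush_size
        self.flushed = []  # stand-in for the actual database writes

    def ingest(self, line):
        self.buffer.append(parse_report(line))
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # Real code would INSERT these rows into a Postgres/TimescaleDB table.
            self.flushed.append(self.buffer)
            self.buffer = []
```

Grafana then only needs a query against that table to draw the dashboards.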
It’s also fun to monitor some production chains, such as the amount of charcoal fuel.</p><img src="https://lh4.googleusercontent.com/JNEU8tBukGsHEKXedOsfgJWABfNScJ_Flp3ABsQx9EZxD_tZH4WGvrIA4dmmJyqnD6ppRqppSxLFSP_goENVUXSlMSyvyoucQiR_lZlZGA7r4mY2TMNvfRoot39Hz2-0FV-tat7S-DB68FicRhHAxGVlS0zkJA91Vg1QERop8habyrh9NwYt2mDUJx4PRQ" /><br /><p style="text-align: left;">A few years ago I already built some libraries for saving metrics to Postgres (with plans to migrate to <a href="https://www.timescale.com/">TimescaleDB</a>), so this time I was able to build an in-game Lua program, a metrics gateway server and a Grafana dashboard in two short hours :) This was a good primer for the more complex factory-controlling programs.</p><p style="text-align: left;">While the modpack is heavily focused on doing things with just Create, some things would get needlessly complicated when done with just in-world redstone circuits. Just like I really enjoyed describing infrastructure with Terraform, I also enjoyed avoiding complex circuits with <a href="https://gist.github.com/Vazde/a90a233bc673fadef7254069ea58c38b">Lua code</a>. It’s a lot more self-documenting, and the code itself is creeper-proof :p Some might argue that the beauty of the game is in using those simpler primitives to achieve the end goal, but at some point it just becomes plain frustrating. Though in this case I’m not very good with Lua, and I’m not even trying to be: I know the bare minimum, and use that to make the programs. I guess that’s my way of playing the same game.</p><p style="text-align: left;">And as it was beautifully noted to me, I guess I’m in the right industry when even relaxing and playing video games leads me back to writing code. Or alternatively, I’m just a sick and troubled mind, which can’t ever truly let go and just <i>relax</i> (:</p><h2 style="text-align: left;">Closing words</h2><p style="text-align: left;">Whoa. It seems I’ve been <i>really</i> active this year. Maybe too much so.
But <i>for now</i> my only real regret is that I haven’t documented my doings even better. This post ended up a lot longer than I first anticipated, trying to cover even the majority of what I’ve done. Especially the gamedev things. But at least I have something to show for it, and I did at least try with Twitter.</p><p style="text-align: left;">Thanks for reading! If you are - for some strange reason - after even more to read, may I recommend the <a href="https://blog.dea.fi/2021/10/blog-15th-year-anniversary.html">15 year anniversary post</a> of the blog. It’s a lot shorter, I promise!</p><p style="text-align: left;">PS. I’m also working on a little something to highlight all the wonderful private code commits of mine, and also my other online doings in a central place. Stay tuned.</p></div>

UBG: Vulkan and hardware ray tracing (2022-07-07)

<div><b>Exciting news!</b> In <a href="https://blog.dea.fi/2022/03/vlog-season-2-sad-progress-report.html">an earlier post</a> I described my conflict with doing stuff and having something to show for it. Well, it seems that I managed to get over it - at least partly.
The past 1.5 months or so I've been rather busy learning Vulkan and hardware ray tracing, and have gotten quite far already!</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjGxY4uWPoerprp_tnA4UT0UXJiongQO4WcFeo8H_EsbdcgrJUQOg5o0UR3l3bWBea9UhAKziUihYyuqJbor-Ti-KeWVVeiwkd9_CPUdHEcyaTFRsXx-Mklg1qph847a_IGJk9bxsfiUSzk7Ze3664yurPjGa7kl07qFJWarMgJuqE0rtLDy7TPR8-p" style="margin-left: 1em; margin-right: 1em;"><img data-original-height="1032" data-original-width="1202" src="https://blogger.googleusercontent.com/img/a/AVvXsEjGxY4uWPoerprp_tnA4UT0UXJiongQO4WcFeo8H_EsbdcgrJUQOg5o0UR3l3bWBea9UhAKziUihYyuqJbor-Ti-KeWVVeiwkd9_CPUdHEcyaTFRsXx-Mklg1qph847a_IGJk9bxsfiUSzk7Ze3664yurPjGa7kl07qFJWarMgJuqE0rtLDy7TPR8-p=s16000" /></a></div></div><div>Though this time I had a bit of help. Instead of doing <i>everything</i> from scratch, I decided to build upon the assets of a Finnish Game Jam 2015 game I was part of, <a href="https://youtu.be/RGFZlr2JhyI">WatDo</a>. Btw, that year was a bit special: I didn't write a single line of (game) code, focusing solely on building the game tilemap with Tiled. Anyway - let's start by briefly talking about how I went about doing the new things, and then about the things themselves: learning Vulkan and RT, and also other engine-level stuff such as asset loading and abstractions.</div><div><br /></div><div>I got inspired (more on that in a later post?) to finally build my other long-time dream - a game with dynamic 2D soft shadows and diffuse global illumination. Considering everything, I thought that my best bet was to learn hardware ray tracing, which in turn meant learning Vulkan. I had zero knowledge of RT, but I had meant to learn Vulkan on many occasions, as it is a modern high-performance graphics API with many improvements compared to even the most modern OpenGL. Especially in the area of multithreading, which is a major focus area in my new "engine" in USGE.
I was still rather sceptical of RT, but decided to move on with learning Vulkan first, as that was the prerequisite and would be very useful just by itself, too.</div><h3 style="text-align: left;">Learning Vulkan</h3><div>As I knew very little, it was easy to just start working through <a href="https://vulkan-tutorial.com/">a tutorial</a>, and let the tutorial speak for itself. And for the longest time there wasn't even anything that could have been shared, as it was all just initialization and more initialization. In the end it took me two weeks to complete the tutorial and have a multisampled and textured triangle on the screen.</div><div><br /></div><div>Quite a pleasant surprise, actually. A long time ago when I initially stumbled upon Vulkan and that very same tutorial, I estimated that it would take at least a month or two to complete it! And now I finished it in just two weeks! I even managed to backport my code hot-reloading system, and with the new explicit APIs of Vulkan (and my own GLFW windowing code), it was slightly easier to accomplish than with USGE, which used OpenGL and Silk.NET.Windowing. Of course the initial discovery work done on the topic was extremely important; I'm hoping to share some details in a VLOG post later. But no promises.</div><div><br /></div><div>The tutorial I used was perhaps intentionally focused on NOT building abstractions on top of the Vulkan functionalities, so after I finished the tutorial I slowly began building those missing pieces for the most common functionalities, concurrently with learning new stuff. But with an asterisk. I intentionally tried not to abstract too many things, as I still know very little about Vulkan and the use cases. I've also read too many horror stories about building engine abstractions and losing all will to continue after that work is "done". And in the olden times that's exactly how I rolled, and it was glorious! Cool stuff can be done even without great abstractions, to a point.
So let's do just that and see where we end up. Especially when there are some rather major engine-level things I'm yet to do (and don't yet know how best to do them), and all of those can affect things by a lot.</div><h3 style="text-align: left;">Learning ray tracing</h3><div>Confident in my abilities and the success with Vulkan, I started learning about ray tracing. A task I was expecting to fail. But after reading a few papers and watching a couple of videos on the topic, I managed to astonish myself even more. It took me only a week to produce an image containing ray traced elements, and that's on top of the work spent on tilemap rendering stuff. Some of the more helpful videos were:</div><div><ul style="text-align: left;"><li><a href="https://www.youtube.com/playlist?list=PL5B692fm6--sgm8Uiava0IIvUojjFOCSR">Nvidia ray tracing essentials</a> (theory)</li><li><a href="https://www.youtube.com/watch?v=PUhhRNleDe0">Bringing ray tracing to Vulkan</a> (for a general overview)</li><li><a href="https://www.youtube.com/watch?v=QWsfohf0Bqk">Advanced RT stuff</a> (and part 2 also)</li><li><a href="https://www.youtube.com/watch?v=Q1cuuepVNoY">Introduction to DirectX raytracing</a> (more hands-on with the APIs)</li><li><a href="https://www.youtube.com/watch?v=12k_frqw7tM">Lecture on Vulkan RT APIs</a> (Vulkan version of the APIs)</li><li>Finally, <a href="https://www.shadertoy.com/view/lltcRN">a Shadertoy shader</a> which helped me understand the theory in 2D</li></ul></div><div>But three weeks. For learning Vulkan and ray tracing from scratch. I'm extremely happy with such an accomplishment.</div><div><br /></div><div>And at this point I was eager to publish something, too! Something small. But shooting a whole VLOG post still felt like a daunting task, and even writing a blog post would have been too much. 
But a microblog was the right size, so I <a href="https://twitter.com/routaverkko/status/1535227976392097793">tweeted about</a> my accomplishments :)</div><div><br /></div><div>I continued working on RT at a great pace, and shared a few more images on Twitter in a short timeframe. The images also deserved some kind of descriptions, so I did my best while staying inside the 280 character limit. But that felt too restrictive, and I yearned for a good old blog post. But that was too much. So I stopped posting, and turned my full attention to just doing stuff.</div><div><br /></div><div>And just kept on doing stuff. While the initial ray traced images were easy to produce, each further improvement was harder than the last. At some point I finally managed to decide that the RT work had reached a checkpoint, and I could / should / had to work on some other areas for a change.</div><div><br /></div><div>The final upgrades were about adding glowing things on the map, and the process of producing the final map meshes was taking longer than I was happy with. Short iteration time is my thing, and now it was broken. But hope was not lost, no way.</div><h3 style="text-align: left;">Porting the asset system</h3><div>For the previous iteration of my "unicorn" game project USGE, I had produced an asset pipe system which handled asset loading and metadata, and generated an Android-inspired R-file. The system was also meant to do initial asset pre-processing automatically, but I hadn't gotten around to implementing that just yet. And now I clearly needed it, so I got to work.</div><div><br /></div><div>I started by drafting a fluent API for configuring such a processing pipeline, while on a train. But when I got to implementing it, I encountered something shameful. 
While I've started to feel confident in my own programming abilities, the kind of object-oriented principles and especially the soup of generic constraints needed to implement the API in a compile-time-safe way eventually proved to be too difficult :( Or at least in that state of mind I didn't manage to finish it. So I took a step back and implemented the configuration API in a way that is validated only at runtime, and at least got the stuff done. A great psychological win, actually.</div><div><br /></div><div>In the end I now have a system where I have a bunch of .meta.json files in the assets directory. They contain pipeline configs and references to concrete files. The pipelines themselves can then transform the definitions and loaded assets, and finally produce a set of asset definitions and data files for the game to load. And the R-file plus an accompanying asset manifest.</div><div><br /></div><div>In the previous iteration I had all the metadata codegen'd into the R-file, but that was so much work. So now that metadata lives in the manifest file, which is read at an early phase of application startup. The manifest itself mostly just contains file names for each asset, but can also contain special instructions for the asset loaders (like precomputed image sizes). Beyond that, it contains stuff required during development for asset "hot-hot" reloading (yet to be implemented).</div><div><br /></div><div>The asset loaders themselves work in parallel most of the time and use the manifest to know what to load. For example, in the case of textures:</div><div><ul style="text-align: left;"><li>Allocate Vulkan image handles and get memory requirements; pixel size known via manifest (currently non-threaded, but rather easy to improve).</li><li>Allocate one large buffer for image data.</li><li>Load all images in parallel from disk (or later on, from memory). 
This includes IO, parsing, GPU uploads and mipmap generation.</li><li>Clean up staging buffers.</li></ul></div><div>Asset loading is one area where I'm especially satisfied with Vulkan's multithreading support. Things just work out of the box, no magic required.</div><div><br /></div><div>Oh, and all the different types of assets are loaded in parallel, too. The loading order is guided by the manifest. It doesn't typically matter, but profiling showed that the map data took a lot longer to load than any other asset, so I implemented a special configuration option for marking some assets as "Expensive". They are loaded first, meaning that I save about 20ms of loading time with that simple change. The very first loader thread starts to load the map, and while that is happening all the other parallel threads manage to finish their work. If, on the other hand, the other assets were loaded first, they would finish loading a bit faster, but we'd end up waiting for the big asset for a lot longer:</div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj_vU7BrvnC_SNo91KBOhwHZkCPDl_o2twfMp1aa74NWoDROpROylsWYY1V8NU0lErV40XYbjqd8cdQHiw16AWzlzgsEpRA8Q_TI4i419X-1aVV16vJGkl8SjBM1a5wkvEau14MtWyzXuekWmsB8nQalLMNKHMQeus-RhdN0ohWw7aneOYgORd6u9zO" style="margin-left: 1em; margin-right: 1em;"><img data-original-height="416" data-original-width="454" src="https://blogger.googleusercontent.com/img/a/AVvXsEj_vU7BrvnC_SNo91KBOhwHZkCPDl_o2twfMp1aa74NWoDROpROylsWYY1V8NU0lErV40XYbjqd8cdQHiw16AWzlzgsEpRA8Q_TI4i419X-1aVV16vJGkl8SjBM1a5wkvEau14MtWyzXuekWmsB8nQalLMNKHMQeus-RhdN0ohWw7aneOYgORd6u9zO=s16000" /></a></div></div><div>There's a lot I could improve in the asset system, but for now it is good enough, allowing me to focus on other things.</div><h3 style="text-align: left;">Other load time improvements</h3><div>While the load times were the driving reason for the asset system, the system of course 
also has its primary use :p But performance is a nice focus area. And in <i>parallel</i> with putting the finishing touches on the assets, I also worked to improve other things which impact the loading times. As I briefly mentioned, I've built a hot-reload system for code. When the game host starts, it creates the desktop window and sets up the Vulkan context. Then it dynamically loads the game's assemblies and executes the code. When a change is detected, the same window is reused, and the code is reloaded. Initially the host was not aware of the assets, but I've since improved things, allowing me to cache them.</div><div><br /></div><div>Also, I managed to improve the reload process in the host by parallelizing assembly reloading, Vulkan context recycling (not required, but helps to point out resource leaks) and asset manifest processing. This alone saved me about 300ms.</div><div><br /></div><div>Currently the asset caching is limited to just the file data, but even that yielded an improvement of about 100-200 ms. The rather unoptimized map asset is 150 MB, and sadly takes a while to load from even an NVMe disk. But by caching the data in the host, that can be skipped. Thanks to the checksums built into the manifest, the host needs to re-read only the files which have changed between reloads. When the game code itself runs, it can use the data from memory, skipping the disk reads. As a final tiny optimization the cached asset data is allocated in long-lived pinned arrays, hopefully reducing the GC pressure by just a tiny bit.</div><div><br /></div><div>I'd like to improve things by a notch more by having the host keep the textures themselves in memory, but currently it's actually not worth it, as the multithreaded loading is already fast. Plus it would greatly complicate recycling that Vulkan context. The slowest asset was the map data, and now it's fast thanks to being cached. 
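The checksum bookkeeping amounts to little more than a dictionary keyed by file path. A rough sketch of the idea (the names and the hash choice are illustrative, not the actual host code):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Sketch of checksum-based caching between reloads: file bytes stay in the
// host's memory, and a file is re-read only when the checksum recorded in
// the manifest differs from the one cached alongside the data.
var cache = new Dictionary<string, (string Checksum, byte[] Data)>();

byte[] LoadCached(string path, string manifestChecksum)
{
    if (cache.TryGetValue(path, out var entry) && entry.Checksum == manifestChecksum)
        return entry.Data; // unchanged since the last reload: skip the disk read

    byte[] data = File.ReadAllBytes(path);
    cache[path] = (manifestChecksum, data);
    return data;
}

// One way such a manifest checksum could be computed.
string ChecksumOf(byte[] data) => Convert.ToHexString(SHA256.HashData(data));
```

On each reload the host would run something like LoadCached over every manifest entry, so only the entries whose manifest checksum changed ever touch the disk.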
So now all the assets are loaded in about 50-100ms via the cache.</div><div><br /></div><div>After the assets (including shader binaries) have been loaded, the game can concurrently compile the graphics and ray tracing pipelines (the latter taking 250ms even with a pipeline cache!) and build the initial ray tracing acceleration structures. And once just the pipelines are ready, the game can also start to initialize (later on with even more threading) all the game objects while waiting for the AS build.</div><div><br /></div><div>In the future I could further optimize things by starting to build the AS immediately once the map asset is loaded (before textures), and the pipelines could also start compiling right after the required shader bytecodes are loaded. But at this point I deemed it best not to complicate things too much. Oh, and of course I'm yet to optimize the map mesh itself. There's A LOT of shapes that could be merged. There's also no index buffer yet.</div><div><br /></div><div>I probably forgot something, but with all these changes I managed to cut the total reloading time from almost 4 seconds down to about 600 ms, so it's almost instant again. The initial load takes about 2.6 seconds, but thankfully that needs to happen only rarely. So about 1 - 1.5 seconds from pressing "Build" in Visual Studio to reloading all game code and assets and getting the first new frames on the screen :)</div><div><br /></div><div>It's a joy to develop with an iteration time that short.</div><h3 style="text-align: left;">Further RT work</h3><div>With the basics in check once again, I delved into another must-have feature: ray tracing denoising. Initially I tried to do my own filtering and got surprisingly close with a pseudo k-nearest filter, but the performance was awful. When I later read more about the techniques, it seems that I was very close to how things should be done properly. 
Instead of having an adaptive window size in a single pass, multiple smaller passes should be made.</div><div><br /></div><div>After giving up on implementing my own filter I managed to gather enough courage to try out Nvidia's OptiX denoiser. That unfortunately was only available via their driver, and required using their C++ SDK. Before committing to it fully, I did some manual work with their command line sample. Exported some frames from my game, converted them to EXR, ran the denoiser, and converted the files back to PNG. And it all looked rather promising!</div><div><br /></div><div>Then, to my astonishment, I managed to build a C DLL wrapping the functionality, and integrate that into my game. The performance sucked due to expensive buffer copies via the CPU, but I could see denoised stuff, and it all looked correct! That was most excellent. Then I spent a few days optimizing the buffer copies, eventually managing to share the Vulkan buffer directly with CUDA thanks to VK_KHR_external_memory_win32 and cuImportExternalMemory. I still have one more smaller buffer to skip the copy on, but on a 1200x1000 R32G32B32(A32 unused) buffer the denoising process now takes about 4 - 5 ms on my RTX 3080 Ti. That's still a bit slow, but perfectly usable!</div><div><br /></div><div>Unfortunately the high frame rates allowed me to see that even in OptiX's temporal mode there's considerable variance between frames, because I just don't have enough <i>good</i> samples per pixel. Only at about 64 spp do things even <i>begin</i> to look passable. 1 - 2 spp is the generally agreed upon good target...</div><div><br /></div><div>I had hoped that I wouldn't need to implement importance sampling in a game this simple, but it seems that I was wrong. I was victorious with Vulkan and the basics of RT, but I fear this is a battle I might not be able to win. It was nice knowing you all.</div><div><br /></div><div>Next I'll be researching ReSTIR, DDGI techniques and the like. 
Just wanted to write this advance-eulogy first.</div><div><br /></div><div style="text-align: center;">* * *</div><div><br /></div><div>But perhaps not all is lost. I mentioned having some success with my own denoiser. I should also try AMD's open source one for comparison. Or perhaps try writing a special version combining that with my own. While it might sound a bit arrogant to even try writing my own version that beats the state of the art from industry leaders, there's one thing on my side: I haven't yet spoken about this in depth, but I'm not trying to ray trace the whole image; just the lighting. Once I have the ray traced per-frame lightmap, I can then render the game objects with a normal rasterizer, and look up the lighting from the smoothed-out RT image. For example, the rather distorted yellow grids in the image at the start of this post are of no concern, as a separate raster step will draw over them.</div><div><br /></div><div>Oh, almost forgot. I've spent extra effort in making the ray tracing happen in a true 2D space. But if I want to truly simulate how light behaves, I must do it in 3D, and have all the game objects have a 3D representation: in real life, if a light beam hits a floor, some photons bounce to the ceiling and the walls, and then back to another point on the floor. That isn't currently happening, leading to light beams that don't illuminate rooms the way they should. It might be possible to do some post processing to simulate it, but I'm not too hopeful. "Fun" times ahead.</div><div><br /></div><div>Anyway. Quite an update. I really hope to be able to return victorious some day.</div>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-1192780340615798432022-04-01T20:25:00.011+03:002022-04-03T18:42:24.275+03:00Adventures in making a .NET IoT timer with Meadow<p>Despite the
hardships I described in the previous post I’ve managed to produce something.
Something rather cool! While I’d like to present it in video form, I’m just not
feeling up for it. Blog texts are more my medium, anyway. Or maybe it’s that I
have more than ten years of experience with this; can’t say the same about video :d</p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Anyway. A
brief summary before embarking on this adventure: I made an internet connected
timer display! On a microcontroller! With little previous experience. With
.NET! Started with a dirty “MVP”, and then focused on improving the stability
until I was satisfied. Next step would be to add more features on this rather
solid foundation and improve the form-factor. See a <a href="https://www.youtube.com/watch?v=DM48zLoU1q4">short clip</a>
about it! There's also a clip about <a href="https://www.youtube.com/watch?v=MKNM8CQRi9w">an early version</a>.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">This is a relatively
small project, but I have a lot to tell! Writing this all must have taken at least
a dozen hours or so. Maybe even more :o<o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Background</span></h2>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">To prepare
for this adventure, let’s first take a step back and start from the beginning with
some background, as usual. Feel free to skip to the next section at your
leisure, or even the one following it.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Eons ago I
implemented a JavaScript-based countdown timer. It started as a pizza timer and
initially saw most use at small LAN parties, and due to the convenience I’ve
used it for other foods too :p But it could have been even more convenient!
So, I added a shortcut for it to my Runner, which is a Win-R replacement with
strong customization. After this I could launch it just by pressing Win-Q and
typing <u>cd 12</u>, and this would open the timer page in the browser and set it to count
down from 12 minutes with the query string. How easy is that!<o:p></o:p></span></p>
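In spirit, the whole shortcut boils down to building a URL. A tiny sketch (the host name and query parameter here are made up for illustration; the real page differs):

```csharp
using System;

// Hypothetical sketch: expand a Runner command like "cd 12" into a
// countdown-page URL carrying the minutes in the query string.
static string CountdownUrl(string command)
{
    var parts = command.Split(' ');
    if (parts.Length != 2 || parts[0] != "cd")
        throw new ArgumentException($"expected 'cd <minutes>', got '{command}'");
    int minutes = int.Parse(parts[1]);
    return $"https://example.invalid/timer?minutes={minutes}";
}

Console.WriteLine(CountdownUrl("cd 12"));
```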
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">But
sometimes there’s a need to also count to a specific point in time. So, I added
a command for it. <u>at 12:10</u> would open the timer, and set the remaining time in
such a way that it would trigger at that given time. This opened up a lot more
possibilities for use, and it often was the case that I had several timers
running at once. And more than once some of those timers were set for longer
times, and I happened to restart my PC during them. Let’s just say that a dumb
browser-based countdown isn’t quite compatible with that concept. Not to
mention the times when Chrome updates prevented the timer from accessing audio
due to low user engagement. Thank god there was a group policy to fix that. It
was also a bit of a bother to either keep the tab active, or constantly take a
peek to see how much time there was remaining. I could have moved the timer to
a second monitor, but it would have required extra effort. If I even had a
second monitor, that is. A single ultrawide is more of my thing.<o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">A plan forms</span></h2>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I needed
something better. Something which provided that durability and reliability, with
the “ergonomics” being a still-important secondary feature. And with the alarms tied to specific instants in time instead of arbitrary durations. I considered my
options, and saw two clear candidates. Either a Windows application with automatic
start and a screen overlay, or something that I could run on a smaller embedded
device like the Raspberry Pi. This second option would then need some kind of
API to interface with it from the desktop and extra hardware to enable sound
and the display.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">The second
option was superior in the sense that it would function independently of the
main desktop, and the platform would likely have fewer interruptions due to
reboots etc. But that extra hardware was quite an issue. But in this I also saw
a third option. A true embedded device. Something where I wouldn’t even have to
concern myself with OS level stuff like getting my app to start automatically
and staying running. And I was already in possession of suitable hardware, and now
software, too.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">This third
option is of course the Meadow F7 device from Wilderness Labs, of which I owned
two (now four). Some time ago it had received an update which enabled the
built-in Wi-Fi hardware allowing it to be easily connected, and I already had the
other extra hardware that was required from the accompanying Founder’s edition “Hack
Kit” and an Adafruit order, namely the displays and a buzzer. <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">The
microcontroller form-factor also offers advantages with size and (perceived)
reliability, with restarts happening quickly. At least once there’s AOT… And
most importantly, I wanted to do things on a microcontroller. Maybe I could
even have the device battery powered some day? <o:p></o:p></span>So that’s what I set out to build with.</p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And boy,
did I build a fine thing in the end ^^<o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Concept validation</span></h2>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">A few years
back I had already played around a bit with the Meadow, and built for example <a href="https://www.youtube.com/watch?v=i8RdKlmA2IU">a code breaking game</a>
with almost the same exact hardware, so I had a pretty good idea how to
approach this particular problem. The functionality itself was rather simple,
both logic- and hardware-wise. Initially.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Another goal
I had was that the device itself wouldn’t have any human interface. Everything
would still be driven through the Runner for superior usability, and as such
the device would have to be connected to the control server either via IP or a
serial connection. I already had some experience with the serial connections,
but IP is always so much cooler, and also more standalone in this case.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And as a
bonus, I wanted to have the ability to have alarms displayed on multiple
devices at once. That way I could have one on my desk, another in the kitchen, etc.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">So, I
started by proving that the device would indeed be able to communicate over IP
as advertised, and that it would also retain that ability over longer periods.
I took the provided <a href="https://github.com/WildernessLabs/Meadow.Core.Samples/blob/main/Source/Meadow.Core.Samples/Network/WiFi_Basics/MeadowApp.cs">Wi-Fi
sample code</a> as my starting point and got to work.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I was quite
pleased that the sample worked unchanged (not counting network configs). Could
it really be this simple? Encouraged by my success, I moved on to the next
phase, and quickly implemented a relatively simple .NET 6 based backend for all
my timing needs, accompanied by an HTTP wrapper for making requests to the
backend.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">But I still
had scepticism regarding the networking in Meadow. On the other hand, I’ve often
struggled with trying to build things that are too perfect too soon, so this
time I settled for the “bare” minimum and just rolled with it. I didn’t bother
to implement persistence, opting to just store things in memory. This wasn’t a
lot better than just having the things in a browser, but at least it was
orders of magnitude better than keeping the stuff only inside Meadow’s memory. I could
always add the persistence layer when the more uncertain things were less
uncertain.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Unfortunately,
my initial scepticism was confirmed. When I got even slightly more serious
about my use of the network, the device just hung. Sometimes this happened
within a minute of boot, and sometimes took more than an hour. That wasn’t
great. Not great at all. But hey, the device is still in beta, and the people
working on the device assured me that stability improvements were actively
being worked on. And they were later fixed!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">As
explained, getting the thing to work wasn’t the end goal. Getting it to work
reliably was. On any other week I would have probably been quite devastated when
something this elementary wasn’t working as expected. But now I embraced the
challenge presented. </span><span lang="EN-US" style="mso-ansi-language: EN-US;">I even
had a secret new tool at my disposal now. One I had been itching to apply
somewhere.</span><span lang="EN-GB" style="mso-ansi-language: EN-GB;"><o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Implementation</span></h2>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">A while ago
the hardware watchdog in the device was exposed for use. While not exactly
graceful, it was perfectly effective in getting the device to recover. Challenge
overcome. How unexpectedly anticlimactic. Additionally, a later firmware update <i>greatly</i> improved network stability.<o:p></o:p></span></p>
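The pattern behind that recovery is the classic watchdog loop: arm the hardware watchdog, then keep "petting" it only while the app seems healthy, so a hang ends in a reboot instead of a permanent freeze. A rough sketch; the enable/reset calls are passed in as delegates here because the exact Meadow API surface has changed between releases:

```csharp
using System;
using System.Threading;

// Sketch of the watchdog pattern. The real enable/reset calls would come
// from Meadow's Device API; delegates stand in for them here.
static void RunWatchdogLoop(
    Action<TimeSpan> enableWatchdog, Action petWatchdog,
    Func<bool> healthy, TimeSpan petInterval, CancellationToken ct)
{
    // Reboot the device if the watchdog is not reset within this window.
    enableWatchdog(TimeSpan.FromSeconds(30));
    while (!ct.IsCancellationRequested)
    {
        // Pet only while things (networking etc.) still respond; if the app
        // hangs, the petting stops and the hardware reboots the device.
        if (healthy())
            petWatchdog();
        if (ct.WaitHandle.WaitOne(petInterval))
            break;
    }
}

// Demonstration without real hardware: count the pets for a short while.
int pets = 0;
using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(50));
RunWatchdogLoop(_ => { }, () => pets++, () => true,
                TimeSpan.FromMilliseconds(5), cts.Token);
Console.WriteLine($"petted the watchdog {pets} times");
```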
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Now I had
more time to focus on the application domain. I needed two things. First,
I’d obviously have to get the timers to the device. And second, a closely
related mandatory reliability feature was getting the device to recover those
timers on booting. Something that would happen quite often for the foreseeable
future.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Luckily this
was something I had anticipated with the initial architecture, and almost the
sole reason the server component even exists. While I didn’t set out to build
perfection right away, it had to be better than just persisting the timers in the
device’s RAM. Especially this early in development I assessed that the server would
have far fewer restarts than the device, and it wouldn’t make sense to try
to persist anything important on the device.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Or at least
I didn’t know enough about embedded hardware to know how reliable it is. All I
know is that SSDs on PCs are quite reliable. And an order of magnitude more reliable still
if the server is clustered over many physical computers and the writes go to
multiple independent storage devices. But that’s another adventure altogether, best
embarked some other year.<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">MVP</span></h3>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Let’s talk
about the APIs first. Respecting the pledge I made to myself earlier, I started
by building something less polished, but something that would work with minimal effort.
And what’s easier than polling over HTTP?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">No state-keeping,
no events, just a dumb endpoint that returned the next countdown timer. And as I
wasn’t familiar with the characteristics of the device’s clock and its
accuracy, I made the endpoint also return the current time. The device could
then compute the exact current time by diffing against a stopwatch that was reset
whenever the endpoint was polled.<o:p></o:p></span></p>
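That time-diffing scheme is small enough to sketch in full (names are illustrative):

```csharp
using System;
using System.Diagnostics;

// Sketch of the clock trick: remember the server's time from the last poll
// response and estimate "now" by adding a local stopwatch's elapsed time.
DateTimeOffset serverTimeAtPoll = DateTimeOffset.MinValue;
var sincePoll = new Stopwatch();

void OnPollResponse(DateTimeOffset serverNow)
{
    serverTimeAtPoll = serverNow;
    sincePoll.Restart(); // measure only the time elapsed since this response
}

DateTimeOffset EstimatedNow() => serverTimeAtPoll + sincePoll.Elapsed;

// Example: the server claims it is exactly noon; from here on the device's
// estimate advances with the stopwatch, independent of its own clock.
OnPollResponse(new DateTimeOffset(2022, 4, 1, 12, 0, 0, TimeSpan.Zero));
Console.WriteLine(EstimatedNow());
```

This sidesteps any drift or inaccuracy in the device's own clock, as long as the polls keep arriving.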
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">And there I
had it! Thanks to the code breaker project, it didn’t take long to have the
remaining time visible on a 7-segment display, and the end of the timer
visualized by flashing a Charlieplex LED matrix. <a href="https://www.youtube.com/watch?v=MKNM8CQRi9w">A fully functional MVP</a>
already.<o:p></o:p></span></p>
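Even just showing the remaining time involves a small formatting decision or two. Something in this spirit; the real display code differs, and the hh:mm fallback here is purely my illustration:

```csharp
using System;

// Illustrative sketch: format a remaining time for a four-digit display,
// switching from mm:ss to hh:mm once there is more than an hour left.
static string FormatRemaining(TimeSpan remaining)
{
    if (remaining < TimeSpan.Zero)
        remaining = TimeSpan.Zero; // never show negative time
    if (remaining < TimeSpan.FromHours(1))
        return $"{remaining.Minutes:00}:{remaining.Seconds:00}";
    if (remaining < TimeSpan.FromHours(100))
        return $"{(int)remaining.TotalHours:00}:{remaining.Minutes:00}";
    return "99:59"; // clamp: out of range for four digits
}

Console.WriteLine(FormatRemaining(TimeSpan.FromMinutes(12)));  // 12:00
Console.WriteLine(FormatRemaining(TimeSpan.FromMinutes(90)));  // 01:30
```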
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Ending up with
a <b>viable</b> end result is rather unheard of, as far as my things go. As
hinted, usually my projects are very long and I aim for the perfect. Sure, I’ve
learned <i>a lot</i> doing it, but only rarely managed to produce an artifact,
and of any real use. And that I had something again, it felt really good! I had
really missed the feeling.<o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Further improvements</span></h2>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Now that I
had a minimum viable product, I could have just ended it there. But it’s just
not in my nature. I had already put a week of work into it. What if I put in
another? I could do so many nice incremental improvements, and all the time
have a working thing. Even if I quit, I’m left with something worthwhile. Plus,
I was feeling good. I really wanted to keep working on it. Even if my fascination
got a bit unhealthy towards the end of the first week. I surprised myself by
taking a short break, and was energized again.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">And there’s
a lot I ended up improving. I’m not sure how I should best present everything,
so here comes <i>something</i>. It doesn’t have to be perfect, right?<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Networking</span></h3>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">The first
obvious improvement was how the device interfaced with the server. If you
recall, the first implementation was simple HTTP polling. Polling has high
latency, and this was something that needed instant feedback in order to feel
reliable. If I set a timer, I want to immediately see that it got set and move
on to do more important things.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I could
have upgraded to long polling and call it a day, but publish/subscribe is a lot
cooler. Plus, it’s more efficient and scales better, not that it was an actual
concern. While I’ve tried to make NATS my go-to in this regard, I decided to go
with another of my favorites: Redis. It’s a mature codebase, and the wire
protocol is dead simple, so it’s going to perform extremely well for my scenario.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Except it
didn’t. I tried using the de facto StackExchange.Redis package, but it turned out
to have too many features. Meadow executes code in an interpreted mode with
some rather primitive JIT, and all those features with a complex handshake
meant that the initial connection took a <i>long</i> time, enough to blow past
about every conceivable timeout. Even five minutes wasn’t enough to complete
the whole handshake. That was just too much.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I could
have still tried NATS, but decided to play it safe and go for the nearly polar
opposite. And have a chance at doing something I had been missing a long time.
Pure UDP. Minimal framing. Dead-simple connectionless protocol with timers.
Handcrafted packets. Oh, how I had missed that world; it has been so long since
I had worked with Tracker.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">And third
time’s the charm. Performance was awesome and things just worked. And had
they happened not to work, timers would soon have rectified the situation. I was
happy.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">There are
just two packet types. The device sends discovery packets at an interval to the
server, and the server sends status packets at an interval to all devices which
have been discovered. And if there’s a new alarm, the status packet is sent
immediately, allowing the device to pick up the countdown without delay. Sure,
it was excessively chatty when there were no updates, but it was also
excessively simple and reliable.<o:p></o:p></span></p>
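<p class="MsoNormal">In code, the device side of that chatter could look roughly like this minimal sketch. The magic bytes, server port and interval here are made-up placeholders, not the real protocol constants:</p>

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch of the device's discovery loop: fire a magic-bytes
// datagram at the server on an interval so the server learns the device's
// address and starts streaming status packets back.
class DiscoverySender
{
    // Hypothetical magic value; the actual bytes are not documented here.
    public static readonly byte[] Magic = { 0xC0, 0xFF, 0xEE, 0x42 };

    public static async Task RunAsync(IPEndPoint server, CancellationToken ct)
    {
        using var udp = new UdpClient();
        while (!ct.IsCancellationRequested)
        {
            await udp.SendAsync(Magic, Magic.Length, server);
            // Assumed interval; cancellation simply ends the loop via the token.
            await Task.Delay(TimeSpan.FromSeconds(10), ct);
        }
    }
}
```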
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">The
discovery packet is just a simple sequence of “magic” bytes, and that’s it. The
status packet is more sophisticated. Mirroring how the HTTP polling endpoint worked,
it contains a sequence of the next few upcoming countdowns which the device
hasn’t yet finished. Additionally, the packet starts with a hash of the data it
represents. The data doesn’t change until the old alarm passes, or a new one
gets inserted before it. This means that the client can simply check those
initial bytes of the packet for the hash, and stop parsing if it equals the
old hash. Only if the hash differs does it need to continue parsing and
possibly allocating memory. So fast! Other than that, there’s really nothing
extra. Not even a header or a real checksum to differentiate the status packets
from garbage :s There probably should be…<o:p></o:p></span></p>
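<p class="MsoNormal">The fast path can be sketched like this; the 4-byte little-endian hash prefix and the method names are my illustrative assumptions, not the actual wire format:</p>

```csharp
using System;
using System.Buffers.Binary;

// Sketch of the client-side check: compare the hash prefix against the last
// seen value, and only parse (and allocate) when it has actually changed.
static class StatusPacket
{
    public static bool TryGetChangedPayload(
        ReadOnlySpan<byte> packet, ref int lastHash, out ReadOnlySpan<byte> payload)
    {
        payload = default;
        if (packet.Length < 4)
            return false;                // not even a full hash prefix
        int hash = BinaryPrimitives.ReadInt32LittleEndian(packet);
        if (hash == lastHash)
            return false;                // unchanged: skip parsing entirely
        lastHash = hash;
        payload = packet.Slice(4);       // countdown entries follow the hash
        return true;
    }
}
```

With no header or real checksum, this hash comparison also ends up being the only thing separating status packets from garbage.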
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">But even
with an implementation of this level things work really well.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Part of the
equation is that the device tells the server that it has received a countdown, or
that it has started/finished alerting it. This still happens via HTTP. While it
would be nice if there were only one communication channel, one could also ask:
why? Right tool for the job, and it already worked well. And the device is plenty
powerful to contain code for both. Everything doesn’t have to be absolutely perfect.
That’s what I actively try to tell myself, and I’m slowly starting to perhaps
even believe it.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">There’s
also a mechanism for keeping the device’s time closely matching the server’s. Initially,
I thought I’d implement NTP. But I don’t really understand it, and I could not find
a good implementation I could run on the device. So, I rolled my own (:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">When the
device boots, it simply does a HTTP call and uses that time. And afterwards
every 15 minutes it asks for the time again. In case the call takes less than a
threshold, the device’s time is updated, after tweaking the time value by half
of the request latency. Because why not. It’s likely mostly symmetric on a LAN,
right?<o:p></o:p></span></p>
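<p class="MsoNormal">The compensation step itself is tiny. A sketch of the idea, under the symmetric-latency assumption (names and the null-on-noisy-sample behaviour are mine, not the firmware's):</p>

```csharp
using System;

// Nudge the server's timestamp forward by half the measured round trip:
// by the time the reply arrives, roughly that much time has passed since
// the server stamped it.
static class TimeSync
{
    // Returns null when the sample took too long to be trusted.
    public static DateTime? Compensate(DateTime serverTime, TimeSpan roundTrip, TimeSpan threshold)
    {
        if (roundTrip > threshold)
            return null;
        return serverTime + TimeSpan.FromTicks(roundTrip.Ticks / 2);
    }
}
```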
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">It works
really well. If I wanted to improve this, I would eliminate the explicit updates
completely, or at least implement them in UDP, so that there’s always only one
round-trip required, improving latency. Not that the current 60-100ms is too
bad. It should be using keep-alives anyway, so there aren’t too many extra
packets. Elimination of these updates could be achieved if the server immediately
replied to the discovery packets with status and time. And perhaps have some
soft nudging so that the device’s time changes by only a few milliseconds at a
time if the difference isn’t too large. That way the remaining time on the
countdown display decrements as expected even under close observation.<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Local persistence</span></h3>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Now that
the protocol was at a satisfactory level, I could continue to improve other
things related to reliability. Which still happened to be related to
networking, too. As explained, the way the device protocol works is that the server
sends the “next” events to the device. For the server to know what these next events
are, it needs to know if the device has already alerted them: the device needs
to tell the server this. But the network can be unreliable, and I don’t want to
bother the user with duplicated alarms in the case where an alarm fires but the server
can’t be reached before the device reboots.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">So
obviously the device needs to be able to locally persist these states and then
ignore them if the server disagrees, and resend the state update. But how to
store this data? While Meadow does have onboard flash storage which is accessible
to user code, I’m concerned about write endurance. State updates can happen
relatively often, so it might wear out the device at a surprising rate.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">But there’s
alternatives! I’ve been fascinated by write endurance before, and happened to
stumble upon a type of memory which is persistent without power, yet having a
superior write endurance compared to flash, while being relatively affordable and
usable in embedded devices. As a part of Adafruit order I got a couple FRAM (Ferroelectric
RAM) modules for unrelated purposes. These particular modules have write
endurance of about 10e12 per byte. While still finite, it’s practically infinite
in this application. How cool is that!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">There was
no ready-made library for using the modules with Meadow, so I ended up writing
my own based on Adafruit’s Arduino code. Things went quite smoothly – after I
learned that the chip select pin can’t be released between sending a command
and reading the result. Oh, and there was also another thing. This particular
device requires sending a separate write enable command before the actual write
command. Adafruit’s library insists that the write enable command needs to be
sent once, and then multiple writes can be issued. But according to the datasheet,
the write enable latch is reset each time chip select is released, and a new
write command can’t be issued without releasing the pin. It was a bit frustrating
to figure that out. Or at least I couldn’t figure out how it was supposed to
work. This was my first time interfacing with an SPI device, after all.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Now that I
had the storage device, I could get to writing things to it. I ended up with something
relatively straightforward. The persisted data consists of “packets” of
constant size, and each having a static header, countdown GUID, the latest
status enum value, a serial and then a hash. Each time a new update is written,
it’s written after the one before it. This way the memory gets worn relatively
evenly, not that the write endurance really was a problem. But why not.<o:p></o:p></span></p>
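<p class="MsoNormal">The layout can be sketched as fixed-size records appended one after another, wrapping around at the end of the memory. The field sizes and names below are my guesses for illustration, not the actual firmware layout:</p>

```csharp
using System;

// Illustrative append-style record layout: constant-size records written
// sequentially so wear spreads roughly evenly across the FRAM.
static class StateLog
{
    // header + countdown GUID + status enum + serial + shortened hash
    public const int RecordSize = 2 + 16 + 1 + 4 + 4;

    // Byte offset of the n-th record, wrapping once the memory is full.
    public static int OffsetFor(int recordIndex, int memorySize)
    {
        int slots = memorySize / RecordSize;
        return (recordIndex % slots) * RecordSize;
    }
}
```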
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">For the
hash I wanted to use just a simple CRC32, but as it happens, .NET standard 2.1
doesn’t have an implementation for it, and I didn’t want an extra library just
for it. But what I have is MD5. And as a bonus, it is hardware accelerated,
too! As the full hash is rather excessive, I simply XOR it to shorten it.<o:p></o:p></span></p>
<p class="codeblock"><span lang="EN-US" style="mso-ansi-language: EN-US;">Span<byte>
hash = stackalloc byte[16];<br />
// MD5.Create().TryComputeHash(…, hash, …)<br />
var ints = MemoryMarshal.Cast<byte, int>(hash);<br />
var smallHash = ints[0] ^ ints[1] ^ ints[2] ^ ints[3];<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Beautiful,
isn’t it.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Now with
the states persisted, the device can read through the memory and try parsing a
packet from each packet-sized offset. If the header and the hash match, it is
assumed to be valid persisted state. If multiple updates are found for a single
event, the “newest” one is selected, first based on the serial (a version
number) and the state. Afterwards, all those states are bulk sent to the server
during the startup sequence, which now once again sends relevant status updates
which the device doesn’t have to ignore. This also saves a bit of network bandwidth.<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Watchdog</span></h3>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Continuing
with the reliability improvements, my next focus was improving the watchdog.
The initial implementation guarded well against complete device hangs, but wasn’t
much more sophisticated than that. As the application now consisted of the
time updates, the discovery stuff, the actual timing code and lastly an asynchronous
bonus layer for the state updates (more about it later), it made sense to start
monitoring all of them. But there’s only a single watchdog. How to watch for so
many different things?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">What I
ended up was a collection of timestamps which record when each of those components
was last healthy (=reached a checkpoint), and then a task to periodically compare
those timestamps against specific timeout values. If any of the timers is deemed
unhealthy, and error is printed and the watchdog is not reset. This leads to the
device restarting, and usually things start to work again. As a bonus, as the
timeouts are computed in “user-space”, they can be a lot longer than the default
short.MaxValue milliseconds the Meadow’s watchdog makes possible. Mostly useful
for the time updater.<o:p></o:p></span></p>
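<p class="MsoNormal">The multiplexing idea can be sketched like this (a minimal sketch with illustrative names, not the actual firmware code): components record checkpoint timestamps, and a single monitor refrains from resetting the hardware watchdog if any of them has gone quiet for too long.</p>

```csharp
using System;
using System.Collections.Concurrent;

// One hardware watchdog, many watched components: each component has its own
// timeout, and the watchdog is only petted while every component is healthy.
class HealthMonitor
{
    readonly ConcurrentDictionary<string, DateTime> _checkpoints =
        new ConcurrentDictionary<string, DateTime>();
    readonly ConcurrentDictionary<string, TimeSpan> _timeouts =
        new ConcurrentDictionary<string, TimeSpan>();

    public void Register(string component, TimeSpan timeout)
    {
        _timeouts[component] = timeout;
        _checkpoints[component] = DateTime.UtcNow;
    }

    // Components call this whenever they complete a unit of work.
    public void Checkpoint(string component) =>
        _checkpoints[component] = DateTime.UtcNow;

    // The monitor task resets the hardware watchdog only while this is true.
    public bool AllHealthy(DateTime now)
    {
        foreach (var pair in _timeouts)
            if (now - _checkpoints[pair.Key] > pair.Value)
                return false;
        return true;
    }
}
```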
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I’ve
spotted the device to restart a couple of times due to above, but I don’t have
the specifics on why. There’s some kind of traceback visible on the small debug
display I have attached to it, but it’s too small to display it in full. I’m
considering on trying to write the tracebacks to the flash memory as they
happen, and then sending them to the server after reboot, or on background. Or
maybe order a lot larger display just to display the longer tracebacks :D<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">Usability</span></h3>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">As I built
the timer on top of the hardware I had in the code breaker, I already had a
7-segment display and a bright “lamp”. The 7-segment display was obviously for showing
the time remaining, and blinking the lamp for alarming. In this case the lamp
is an addressable Charlieplex led matrix display (had to make a driver for it
myself, again). It’s total overkill, as I’m just filling all the pixels with
a single brightness value. But it’s easy. And really bright.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">But what’s
an alarm without auditory output, too. In the Hack Kit there also was a piezo
speaker which was perfect for alarms when attached to a PWM port. I immediately
hated how it sounded. It was perfect.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">But I could
do better.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I added a
small beep when a new alarm was detected so that there was more feedback for
entering one. A tiny thing, but a really nice one.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I figured
out I could also improve the 7-segment display. This is probably a bit controversial,
but this is my thing, and I can do stuff just the way I want :) The purpose the
display it is to show the remaining time, but only when I’m interested in it. I
found it a bit obnoxious that the seconds kept updating every second even when
they didn’t really have any relevance. So, I made it so that the display only
shows the seconds if there’s less than 10 minutes left. If there’s more, only
the minutes are shown, with the digits reserved for seconds remaining
completely dark.<o:p></o:p></span></p>
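<p class="MsoNormal">As a tiny sketch of that rule (the helper name and the dot separator are illustrative; the real driver pushes segments, not strings):</p>

```csharp
using System;

// Show seconds only when less than 10 minutes remain; otherwise leave the
// seconds digits dark (rendered here as spaces).
static class CountdownDisplay
{
    public static string Format(TimeSpan remaining)
    {
        if (remaining < TimeSpan.FromMinutes(10))
            return $"{(int)remaining.TotalMinutes:D2}.{remaining.Seconds:D2}";
        return $"{(int)remaining.TotalMinutes:D2}.  ";
    }
}
```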
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">The seconds
remaining dark actually serves a dual purpose. The way 7-segment displays are
typically driven is via a matrix, with only a single led receiving power at a
time. This leads to flickering, which is especially apparent at lower brightness
settings. The less illuminated areas there are on the display, the less there’s
surface area for the flickering to manifest at. <span style="mso-spacerun: yes;"> </span>During use I found that I’m highly sensitive
for the flickering. The display is a small object, and if I moved my head around
it felt as if the display was moving. That was not a nice feeling. The less
flickering, the better.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Also, as
the display is for indication and not illumination, I had used it at the lowest
possible brightness setting. This helps to reduce visual fatigue when the
display remains in my field of view. But as I mentioned, this was at odds with
the flickering. So, as a workaround I bumped up to the highest brightness and covered
the display with a dimming film. Not very elegant or flexible, but it felt like
it helped, and my camera seemed to agree. It’s still nowhere near perfect, but
it’s usable at least.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">As there
doesn’t really seem to be any 7-segment display which don’t flicker, my other
options seem to be making my own (unfeasible), or using another display type.
OLEDs would be great, but they might end up with burn-in. Not sure if that’s
really a problem. There’s also TFTs, but I’m not sure how readable they are
with their reduced brightness. I do have one TFT display (the debug one), but
haven’t yet tested to render the timer on it.<o:p></o:p></span></p><div style="text-align: left;"><h3 style="text-align: left;"><span lang="EN-US">Accuracy</span></h3>
<p class="MsoNormal"><span lang="EN-US">And lastly,
I focused on accuracy. As I hinted, the system supports alarms on
multiple devices at once. I wanted to make sure that different devices would
display the same time, and start alerting at the exact same time.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">I already
had the time updates with latency compensation, so most of the work was already
done. What was left was to make sure that the time calculation logic was
accurate, and that the code which was executed when an alarm started executes
in roughly the same time on different devices. The biggest hurdle was the
status updates. On a desktop it happened practically instantly, but the Meadow
on Wi-Fi took considerably longer to execute the update.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">I solved
this by making the status updates asynchronous. The update is written to FRAM
instantly, but afterwards it goes to a background queue and takes however long
it takes. With automatic retries.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">After these
the alarms trigger about as closely as possible, even on drastically different
hardware :) See the video in the intro.<o:p></o:p></span></p>
<h2 style="text-align: left;"><span lang="EN-US">About the development
experience</span></h2>
<p class="MsoNormal"><span lang="EN-US">Before (finally)
concluding, I’d like to talk briefly about the developer experience. I’ve grown
to be a big .NET fan, and I was ecstatic that I had the ability to stay on the
platform even when targeting a microcontroller. Likely wouldn’t have targeted one
if that wasn’t the case. At least not without a prototype in .NET.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">And what’s even
better is that Meadow has support for the full <u>.netstandard2.1</u> profile,
so not just some exotic device-specific framework. While I’d love to have the
full .NET 5+ support I’ve heard fables of, that profile has <i>mostly</i> all
the features I need. What this enables is the ability to write .NET library
code as usual, and have that work on the device without modifications.
Including networking and async/await. The only thing I needed to add extra
support for was the application-specific hardware, like the displays and the
FRAM chip, but that was handled via a “device interface” with just a few
methods.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">All this meant
that I could write the application logic in a reusable library, and then host
that application on different targets with minimal code. In this case one
target was obviously the Meadow, and another was LinqPad for running the code
on PC. This also meant that for testing most changes I didn’t even have to
deploy the code to the device (a task which takes a few minutes when also
counting the startup time), and could instead locally test them on a desktop PC,
taking only a few seconds to get results (including the time it took to compile
the application). After testing I could finally deploy the app on the device,
and it just worked. It was glorious.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Of course,
testing more device-centric things wasn’t this easy, but there were only few of
those things.<o:p></o:p></span></p></div>
<h2 style="text-align: left;"><span lang="EN-US" style="mso-ansi-language: EN-US;">What’s next?</span></h2>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">After all those
improvements the device is at very good place already! The core is now very
stable and I feel really confident that I will be able to rely on the device
side of things.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">What’s
still missing is the server-side improvements. Things still are not persisted to
a database on that end, and I’m running the server of the desktop, so as a
whole the stability isn’t that much better than it used to be. But after I improve
that aspect, things are really well all around!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">The next
real improvement is probably either the form-factor, or features. The device is
built on a solderless breadboard with a lot of jumper wires. It takes quite a
bit more space than it could. I have plans in moving the FRAM chip to a backpack,
and replace the led matrix with just a led or two. The piezo should be good
enough to get my attention. It’s a bit sad if I have to keep the dimming film
on the display. I could have used the lower brightness in normal use, and then
blink it at full power when the alarm ends, subverting the need to have separate
LEDs.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">Anyway.
After this I won’t need the breadboard, and the whole thing then fits on a three-layer
feather form factor, taking considerably less space on my desk, enabling better
positioning. I’ll make a post about it when/if I get around to implementing it.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">I’m also
flirting with the idea of introducing strong cryptography, especially on the
UDP layer as it’s stateless. While it won’t help with confidentiality
considering the timing aspect of the system, it will greatly help with
integrity and authenticity. It’s like an “easy” solution for ignoring garbage
packets. If a packet doesn’t pass crypto, it can be ignored. And if it does, it’s
probably valid application data! HTTP on the other hand is stateful and has a “strict”
structure, so there’s not a realistic chance for garbage.<o:p></o:p></span></p><p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">And maybe some new features, too. Like customizing the alerts, by allowing some countdowns to be silent, or with a different (=less annoying) tone.</span></p><p class="MsoNormal"><span lang="EN-US" style="mso-ansi-language: EN-US;">And as these things happen over a rather simple API, I can further customize the functionality by writing orchestration code on another system. For example the <u>at</u> command in Runner is implemented by having it perform calculations, and then adding a countdown to a specific time. I'm also going to write a new command which cancels an existing alarm of the same type before starting a new one with the given time. Couldn't do that with the old browser-based approach, but now I can :)</span></p>I really like how this new system turned out. And while this might perhaps not sound like a lot, it really made a difference. After using this new thing for just a few days, I felt really handicapped when I had to use the old alarms for a while. The difference in usability was really astonishing!Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-89109345284772052802022-03-20T14:28:00.001+02:002022-03-20T14:28:53.969+02:00VLOG season 2, sad progress report<p><b><span lang="EN-GB">As it
turns out</span></b><span lang="EN-GB">, I really
did do it <i>all</i> over again. Including the part where I don’t really
publish anything even though I really wanted to.</span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">So far, I’ve
scripted, filmed, edited and uploaded three whole episodes for the <a href="https://dea.fi/vlog.s02">second season</a> of the VLOG. The first episode is about the whys of the season, the second more
about my general background, with the third episode finally talking more
about the game project this season is to be about.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’ve been
quite satisfied with where the videos have been <i>heading</i>, but that’s also
the problem. They are not <i>there,</i> yet. They are mostly just fluff,
without proper content that would be useful in a broader sense.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">What I
would have liked to show is me building cool things, while making <i>insightful</i>
commentary. To make matters worse, I <i>have</i> built cool things and thought a lot of insightful
thoughts. But I’ve done it off-camera.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">First: I
don’t know how to <i>efficiently</i> present things afterwards. All those
previous episodes required writing a script – a task which takes a considerable
amount of time to reach the quality I’m satisfied with, and then some extra
time to edit it together. I am rather awkward when I try to present things
unprepared, plus it makes the editing process much more tedious.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Second: I
haven’t been able to find the right mind for doing things live. Most of the
time I struggle to find the energy to focus and do things. I’d rather just code
in short bursts and then recharge. Or at least have the ability to do so. Sure,
I may have pulled “all-dayers” all through last week coding IoT things. But if I
set up the camera, I feel pressured to produce value, and be energetic in
commenting the stuff I do, all the while also looking presentable. That’s a lot
of extra to ask, and I just haven’t been able to do it. But I’d really like to.
I even tried to make the live coding editing easier by introducing a recording pedal
(tested with my recent Minecraft videos), but it’s of no use when I can’t even
start due to the reasons above. (Plus, the project is so very overwhelmingly
ambitious…)<o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-GB" style="mso-ansi-language: EN-GB;">So now
instead of coding and providing quality content, I don’t produce anything, and
feel really bad when coding things off-camera. It was supposed to be the polar opposite!</span></b><span lang="EN-GB" style="mso-ansi-language: EN-GB;"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’m sad and
depressed, and I don’t know how to proceed.<o:p></o:p></span></p>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-74220101634175758262021-10-18T16:02:00.001+03:002021-10-18T16:03:46.031+03:00Blog 15th year anniversary<p><b>Wow.</b> Can't believe it's been <i>15 years</i> already for this blog. This year additionally just happens to be a 10 year anniversary for my very own domain, too! I'm kinda forced to celebrate a little. Maybe even get a bit sentimental?</p><p>Anyway :3 How this blog started out does feel a bit cringey. I was quite young then. Also, I feel like blogging was kinda only becoming a thing then, and there were no social media in the larger sense. As such there weren't really any expectations on what a blog should or could be. At least I didn't. I mean, this blog was something that wasn't really even meant to become a thing. It was actually just kind of a fad I tried; something to perhaps practice voicing my thoughts on. How to somehow feel connected on something, just by writing things.</p><p>And I guess I really like writing things (not too often of course, as evident by the post history :p). But seeing how this blog has stood for 15 years - it's so long that I can't really even comprehend it all at once. So much has happened, yet remained unchanged, even. But I guess I feel pride, at least.</p><p>And that - barely - bridges us to the sentimental part. Or whatever this is. So what are all those things that have happened, then?</p><p style="text-align: center;">* * *</p><p>As mentioned in the intro, at the beginning the blog was an experiment. And there had to be something to experiment, platform wise. What's the point otherwise? :p So I started out by trialing a couple of self-hosted PHP-based blogging solutions, but was quickly turned away from those due to them feeling <i>bloated</i>, or just not the right fit for me personally. 
So instead I did what I always do: rolled my own. A great learning experience, but ultimately something that I grew bored of maintaining. But I still liked blogging, so to my own surprise I jumped for quite the opposite end of the spectrum, and moved my blog to Blogger. The final act of this transition was <a href="https://blog.dea.fi/2011/11/imported-texts-from-old-blog.html">implementing an Atom feed</a> for my old blog so that I could use Blogger to import the content. While rather unremarkable in itself, that may have been the first time that I really interfaced with another software solution to make it ingest something I made - instead of me just processing what some other piece of software produced.</p><p>Blogger was about convenience. I had already learned what little there was to learn about the technical side of blogs, and it was now about just the content itself. And when I learned that I could post to Blogger using Windows Live Writer, blogging became effortless. A new blog post was literally just the matter of opening the application and clicking publish. A surprisingly welcome change when compared to my own solution, which required manual file uploads, or editing files directly on a remote server. Or perhaps I had an ugly web-based editor? But the quality of life was so much better with the new system. While I did have some grief about how much larger the page loads were on Blogger, all the other things won. With a proper theme it didn't look that much heavier, and the dialup era had already ended.</p><p>And I guess that's all I have to tell about my history with blogging. Let's talk about the (evolution of) content next.</p><p style="text-align: center;">* * *</p><p>And boy, have I talked about a lot of things. And the initial years are something that I'd rather not really even talk about, anymore. It's like cringey Twitter. But how the blog has evolved since then, that we can talk about. 
Statistics-wise the first post was made a little over 15 years ago, and since then 79 other posts have followed, including this one. For a nice round total of 80 posts.</p><p>But I guess for completeness's sake I do have to address all the content. Like I mentioned above, the blog started out as something to allow me to have my own voice. At the time I wasn't (and still am not) very social, so writing was an exciting opportunity to comment on things I had an interest in: my beginner programmer stuff and other experiments with computers. Especially programming stuff, since I had very limited opportunities to talk about it otherwise. And I guess I can't deny the fact that having a blog was cool, as not too many people had one! I was a hipster before you even knew it was a thing!</p><p>It also didn't take long for me to switch the language from Finnish to English. Because if I bothered to write about things, why not write about things in a language that maximized the potential audience with minimal cost?</p><p>After a time I also began experimenting with voicing some of my other experiences; how I was doing in the physical world, and after quite a bit of hesitation even about <a href="https://blog.dea.fi/2011/04/simoista-asiaa.html">taste-testing meads</a>. But talking about myself always felt strange, still. It was a lot easier to just talk about concrete stuff, and preferably in a way which could perhaps benefit the random reader. Value! Not that many of the posts really were that way, but that was the idea.</p><p>And value is actually maybe the most important talking point here. The blog was (and still is) my very own corner of the world, with the only rules being the ones I make (or break :o) myself. Now, writing this I realized that the blog is a lot more representative of me than I knew.</p><p>The most controlling aspect is the drive to not just half-ass things, but do them well. 
The early tweet-like posts are especially highlighted here, because they were low in effort, and with little thought put into them: just stating a thing. Contrast this to later posts where I not only state things, but also the thoughts behind them. And better yet, explaining the things in such a way that the reader is able to hopefully learn something tangible. For example the times I talked about <a href="https://blog.dea.fi/2014/08/generating-procedural-planetasteroid.html">procedural asteroid generation</a> or <a href="https://blog.dea.fi/2015/08/webrtc-primer.html">WebRTC</a>. Or even the post venting about the <a href="https://blog.dea.fi/2019/12/at-least-alerting-works.html">instability of InfluxDB</a> has a real tangible command to rebuild the index in a non-standard, yet common enough case.</p><p>Or to put that in other words, to create value. Why would anyone want to read this blog if it was just me talking about myself, on a surface level? When I could feel like a VIP and be talking about things that could benefit people. I've never even tried to chase readership numbers, but it does feel awfully nice to see that some posts have had up to 500 views. For some strange reason.</p><p>But like in real life, there are the extremely rare cases where I realize that the only one stopping me is myself. Times when I get to post about tasting those meads, or how a video game <a href="https://blog.dea.fi/2015/10/on-life-is-strange.html">hit me really hard</a>. Though even all those posts are still subject to my strict requirements of avoiding the "air-headed" beginnings and having some real thought behind them. Especially those posts, it seems. And this one, to a point.</p><p style="text-align: center;">* * *</p><p style="text-align: left;">Did I mean to talk about summarizing past posts? I'll keep it short, then. 
It's high time for this post to start giving out some value :d</p><p></p><ul style="text-align: left;"><li>2006: Short posts about developing a primitive blogging system with PHP. First comments around Linux experimentation. First post about my game project, USG.</li><li>2007: Mostly a continuation of the previous year. A one-off comment about tech news.</li><li>2008: More short commentaries about dev stuff. Not many posts.</li><li>2009: Like the previous year. A small side step about game consoles.</li><li>2010: More dev stuff, first non-tech post. At this point the posts start to get more thought put into them.</li><li>2011: Conscription makes me ponder my life's choices. And perhaps ending it. A rather special year, with most posts about non-tech stuff.</li><li>2012: A rather busy year; the level of thought reaching a "steady-state" :d Guild Wars 2 is released.</li><li>2013: USG, refreshed.</li><li>2014: Value.</li><li>2015: A lot more value. Life is Strange happens.</li><li>2016: The special interest of web development returns.</li><li>2017: A rather busy year again, it seems; focus split to the first season of the vlog. Only a single post, but summing up the whole year.</li><li>2018: A rather busy year, once again. Again a single almost panicked post before the year is over.</li><li>2019: Hey look, we're back to producing value! With an asterisk. The new normal. Also, I finally graduated.</li><li>2020: The new normal continues.</li><li>2021: And continues; focus is split to the vlog's second season. Return to gamedev.</li><li>2022: The year some scary life-changing stuff is likely to start happening. There's a good chance I'll blog about it, you know.</li></ul><p></p><p>Talk about value! There's surprisingly a lot of it. And lots of other stuff too, indeed. See you again in 10 years I guess, for the 25th anniversary. 
A time that seems more distant than ever before.</p><div><br /></div>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-41657086119119744282021-08-30T17:26:00.000+03:002021-08-30T17:26:18.983+03:00VLOG season 2<p><b>OMG. I did
it!</b> …again?</p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’m not
sure if I’ve mentioned it here before, but I have <a href="https://dea.fi/vlog.s01">a VLOG</a> <i>in Finnish</i>. A few years ago, I
shot and edited eight episodes of me talking about what I’ve been doing, or
what I’ve been cooking. I also made one special episode about some low-level
serverless technology alternatives with C# and dynamic code compilation and execution,
including syntax tree editing. I didn’t dare to publish any of these, but they
exist.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Now I’ve
begun the second season, with more focus on technology. Perhaps gamedev. And
this time it might just be good enough for public release! The format itself is
still subject to evolution, but the two episodes I’ve so far completed serve as
an introduction to the series and the reasons behind its existence.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Though,
truth be told the first episode isn’t that good, and made me hesitate on the
whole thing. Ultimately, I decided that I’d shoot an episode or two more, and
if they are good enough, they could perhaps redeem the farce that is the first
episode. I think it would be quite bad if the only episode available was the
first one, and it was unbearable to watch. But if there’d also be some better
episodes immediately available, the viewer could <i>perhaps</i> skip the first one, and decide to like the series based on those later episodes :3<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’m now at
the point where I have one better episode ready. Although even that starts a
bit weak. But it gets better! Content-wise I’m still debating. So far, the
series has been only about me, and isn’t really useful for anyone – unless
they just want to get to know me better. That would be perfectly fine if I had
a fanbase, but the case is completely the opposite, so I’m not sure why I’m
bothering with this. <i>But</i>, as said, at least the episodes still have the
purpose of laying the foundations for the episodes to follow, should someone
want to invest (more of) their time in all this right now, or at a later date.</span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’m also still
not sure of the best way to present the auxiliary information about each episode.
Not that anyone would really care. First of all, I have a short description of
each video in YouTube’s video description field. That is fine. But I also have
some ‘technical’ notes about the video there just in case; perhaps to deter some
obvious commenters. Many of these notes are also duplicated on my own website,
but not all. And vice-versa the site contains some notes not on the video
description. I’d like to unify these somehow. I’d like to have as much
information on my own site as possible, yet I feel like there should also be
some on the video description for those obvious cases. But maintaining these two
in sync is a pain, and they each have their own purposes :( So what do?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">But, anyway. Here’s the page for the <a href="https://dea.fi/vlog.s02">new season</a>. It goes a bit more in depth into the production of individual episodes. There's also the near-complete script available for each video if you just want a quick overview of the stuff. Then there's also those video links. Videos themselves are still unlisted, but the links are there D:<o:p></o:p></span></p>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-60626195888920520612021-08-12T11:53:00.009+03:002021-08-12T12:13:04.521+03:00About pride and accomplishment in optional multiplayer games<p><i><span lang="EN-GB">(This is
effectively a rant about how </span></i><span lang="EN-GB">I<i> am incompatible with MMORPGs)</i></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">As Destiny
2 has been feeling very stale for a long time, I’ve shifted my gaze to other
games. There was a rather long burst of Borderlands 3, and then a bit of Roboquest,
and a longer phase on Gunfire Reborn. And all the time I’ve had a tiny longing
towards Guild Wars 2. A longing that has been growing in such a way that now I can’t
wait to play it. I’m also very happy that they just announced a lot of details
about an upcoming expansion, including the release date. What a coincidence. Although
the release is about six months away still; plenty of time to get bored, and I
kinda already am. Allow me to explain:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Destiny 2, Guild
Wars 2 and Borderlands 3 all have a mountain of content in them. And they are
great games, with great gameplay. Sounds great, right? That's a lot of content I’ve
been really enjoying, taken time to get good at, and/or maxed out on. I’m on
the very peak of (almost) everything. But it is not as simple as this. Things
are (almost) too easy, and there is little challenge left, or rewards to earn, that I can get on
my own. Which brings us to the following:<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">My time is limited.</span></h3>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Outside of
expansions(!), a lot of content in D2 and in GW2 is just replaying old
content. In D2 it is the age-old formula of bounties and the season pass, and in
GW2 the latest one is the quest for a legendary amulet. These offer nothing new
to the game, and just direct playing the old content again and again for some
reward. I’m all for replayability, but these literally offer nothing new, or
change the experience in any way. <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And actually,
D2 makes things even worse. It’s a loot-shooter. But the bounties require you to
use your less good loot. And that’s basically the content. Or well, some
bounties just tell you to do X three times. And then repeat that YYY times. And
if you don’t complete those other bounties while at it, you are basically throwing
away almost all “progress” and ability to better “enjoy” further content.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And in GW2’s
case, the new questline requires you to replay both the story content and some open-world
aspects of the past several years. While this is a good opportunity for the
player to spot if there’s any foreshadowing in the story, that’s about all the
value there is. No skips for lengthy dialogues, and nothing to change the
experience. Just a mountain of playing it all again. And the fact that I’ve already
played it once doesn’t net me <i>anything</i>.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Then why
play? Like I already mentioned with D2, if that <i>work</i> were completed,
it would (even greatly) enhance the ability to enjoy the new expansions, and the other repeating content. But in
D2’s case the bounties are so ingrained in the game nowadays that even the
expansions are filled with bounties that punish using the weapons and subclasses
you enjoy.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And with
GW2 (especially after the very recent legendary armory feature), a legendary
piece of equipment is the literal best-in-slot that replaces everything that
would ever go in that slot. It has the same stats as the otherwise-best
Ascended-rarity items, but it allows for free and unlimited stat swapping. After that you don't need anything else in that slot ever again. In a game like Build Wars 2, that’s the hot shit, and highly desirable. You'd be mad not to pursue that.<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">It’s all about the economy and playtime — and psychology</span></h3>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In a boring
game, wouldn’t it be nice to be able to switch playstyle at will, and for free?
Or in case of looter-shooters, wouldn’t it be nice to be able to sometimes enjoy
our hard-earned loot, and get new loot?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’d enjoy
those things, but things just aren’t meant to be. In D2 that means being broke
and longing for new fun and interesting ways to play, with brand-new content just out of reach. And as
it just happens, in GW2 that also means being broke and longing for new fun and
interesting ways to play, with brand-new content just out of reach. Even when the games and the reward structures are
completely different. The essence of all this seems to revolve around accessibility,
skill, balance, long-term investment and perceived value, and efficiency. It’s
quite complicated, but I’ll try to render out my own experience in relation to
this:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In “short”, a lot of the content in these two games is
balanced for good equipment and depending on content, almost no skill. Some
content on the other hand might require near-literal godlike skill and/or <i>a lot</i> of time —
or just a larger amount of less able players.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In D2 the
open-world sandbox enemies are frail, and they die from about anything. But it’s
also fun to mow down large amounts of red bars, even though there could be even
more of them. But to get new ways of destruction, or <i>any</i> kind of real
challenge, the content to play changes. There’s the adjustable-difficulty 3-player
nightfall strikes or 6-player raids, and also the 3-player dungeons to explore.
Strikes are the only piece of content that has matchmaking, and even that stops
right as the actually challenging difficulties start. All non-matchmaking
content is balanced in such a way that a lone solo player has little chance to really
even begin playing them, let alone finish them (dungeons and lost sectors being
the exception).<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">And the
game makes this exceptionally hard for so-called hardcore-casuals (which I like
to call myself). Every few months a new season begins, and raises an arbitrary “power
cap” on equipment. It also raises the power level required on all content to
match. Effectively undoing any investment towards difficult content. Soloing
content like dungeons or master-tier lost sectors is something the game’s
creators reserve for the sweatiest players – those with time to grind the game
and increase that arbitrary power level to a sufficient value in order to match
the level of the enemies. But I don’t have that kind of time. So even <i>if</i>
I was as skilled as them, I just can’t play the same content as them, as I
haven’t played the ever-elusive numbers game beforehand.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">With GW2
this changes slightly. A lot of the solo open-world content does have challenge, but sooner or later it starts to essentially feel like the
infinite variety of oatmeal. Different, but the same. </span>The game perhaps tries to combat this by being a theme-park MMO. <i>Every</i> playable area is vastly different from the others, and as such the world feels disconnected. But then there’s some
things that can’t be soloed. And <i>everything</i> gets very easy with more players.</p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In all
these cases, the rewards stay the same. More players, easier content, a lot more
rewards in the same time span. But at least with these rewards it would be
possible to change the way the game is played in order to keep the experience
fresh. Is there really no good way in the middle?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In D2 I
could play with the equipment I already own and like, but would eventually grow
tired. Or I could try the challenging content, and not really get anywhere. With the most fun weapons gated in that content. In
GW2 I can either keep soloing challenging content and miss out on a lot of
rewards. I could still purchase a limited number of new ascended-tier gear with new stats,
but would eventually go broke. Or I could purchase less-able and a lot cheaper
exotic-tier equipment, but I’d only be making the game intentionally a lot
harder, while also missing out on even more rewards, further limiting my ability
to change things up and stay in a nice position in the game.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In GW2 the
most <b>long-term cost-conscious choice</b> would be to craft a full set of legendary weapons,
armor and trinkets. Then I could just enjoy playing with what I want. But the
amount of work is <i>legendary</i>. Just to get the gated materials for one armor
weight class (out of 3), it would take an estimated 500-1000 hours of constant gameplay via
WvW over 24 weeks. More if there are gaps on some weeks. Alternatively, via PvP
the gated materials can be earned in about 280-330 hours over 6-24 months (but it still takes a good number of hours every two months, or else things take a lot longer). Then there’s
also the weapons and trinkets, and the normal materials for all these. And that
is not cheap. But then again, legendaries are the be-all end-all of equipment. Equipment-wise there's nothing left after acquiring them.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">WvW is just
grind when solo, but PvP can be really engaging. But then it, too, eventually turns
to rewards and tryharding, and starts to feel like a chore. Just like everything
else. And if only I had better, more predictable teammates.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Then there’s
the (5 out of 6, I already have one) legendary trinkets and their quests. I have no estimate on how
long they take; PvE ring and accessory have similarly lengthy quests as the
amulet I spoke of earlier. Second ring and accessory are PvP and WvW only, and take
time comparable to multiple armor pieces. And then the weapons, which are
thankfully mostly just about money, but still have a lot of gated stuff. But the weapons are perhaps the most irrelevant of these, and I already have a few of them.<o:p></o:p></span></p>
<h3 style="text-align: left;"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Let’s finally talk about
multiplayer</span></h3>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Nearly all
these problems are solvable. There’s <i>so much more</i> content gated in and
behind raids (in either game), or fractals, or dungeons, or even WvW. Simply
play them with a group for the intended experience. A lot of perfectly balanced
challenge, and great rewards. Just like all things should be.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">But that is
the problem. It all requires a group. Not only is my time limited, but my
social energy is exceptionally limited. Luckily things are easier with people I
know; and I really <i>used</i> to enjoy doing guild content in Guild Wars 2. Unluckily
the schedules and expectations eventually just took a toll on me. I just couldn’t
find the social, mental or even physical energy (due to sleep problems) to
always be there for the group, and fell out. People missed me, for a while.
Then life went on, and getting back became hard. Then even later many people
stopped playing, or found new groups, and there was nothing left.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Now I’d
have to find a whole new group, and find the <b>constant</b> energy for it. Or
alternatively I could look and fight really hard outside of the game, and eventually
land in less-organized pick-up groups for a single instance of some content.
But to make that happen, I’d already be expected to be master of that very content.
And be expected to talk, fluently. If I can’t do that, I can’t ever even begin
to enjoy any of that gated story content, challenge or rewards. I really like the games, but would like them even more if I could play them the way I want, and all the content. This is not just the fear of missing out. This is missing out.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">In the end I’m
like Sisyphus. Forever doomed to meagre repeating work with pride and
accomplishment <i>in sight</i>, but always <i>just out of reach</i>.<o:p></o:p></span></p>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-64664975067218633322021-08-11T14:01:00.000+03:002021-08-11T14:01:01.793+03:00Top things to pursue<p><b>My
long-time readers</b> might know or guess that I struggle with anxiety about
wanting to do too many things, and that I always try to stay productive even when I should relax. I was recently prompted
to make a ranked list of 20 things I’d like to pursue, and forget everything
except the top 3. I shall now combine these concepts: I’ll make the list, but
won’t forget a thing. And as everything doesn’t always have to be perfect, it
is not ranked. At least to the absolute final degree. Kek. Also, true to myself
the list is a mix of ‘work’ and ‘leisure’. Because leisure is
still a serious business, and can’t be taken lightly.</p>
<p class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">So anyway,
in a surprisingly small amount of time I came up with this list, which I’ll
just leave here. I feel that something important might still be missing, but
this is what I came up with. And as nothing is ever truly complete, I might
augment this one later. I'll try to leave a note.</span></p><ul style="text-align: left;"><li>Game development</li><li>Articulation and verbal skills via/and VLOGs</li><li>Gaming</li><li>Expanding social life</li><li>Embedded programming</li><li>Home automation</li><li>Television and movies</li><li>‘Home’-server, high-availability computing, serverless and modern web infra</li><ul><li>a) in the cloud</li><li>b) self-hosted</li></ul><li>Getting really good at cooking</li><li>Transition in fashion</li><li>DAW-centric music</li><li>Skill-based sports</li><li>Travel</li><li>Long- and short-range radio communication, both data and voice</li><li>Photography and videography</li><li>Demoscene music and synchronized visuals, also on a stage; performance coding</li><li>Writing</li><li>Playing tabletop RPGs</li><li>Designing my dream home together with professionals</li></ul>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-14635035784858288272020-10-15T18:17:00.000+03:002020-10-15T18:17:50.325+03:00Rambling about C# 9.0, games and networking<p><b>Back at it once again!</b> Rambling about stuff, and not really even trying to make a point. I'm warning you. What else are you supposed to do when your train is late by several hours?</p><p style="text-align: center;">* * *</p><p>I recently got an urge to test out the new C# 9.0 language features and see if they could make it easier to write state-centric games (like USG:R I blogged about earlier). TypeScript is nice, but nothing really beats C#, so this is quite exciting. With the addition of <i>data classes</i> (now called just <i>records</i>), the promise was that it would be easier to use immutability, which is one of the core principles with React and Redux.</p><p>And things did work. And they worked just like with Redux. But that's the problem. 
The Redux way is that each reducer produces the new state by creating a new instance of the state with the changed part replaced: <u>return {...oldState, counter: oldState.counter + 1}</u>. Very simple, and the reducers stay pure by not mutating the input values. And each state object can be safely stored in case there's need to do some kind of time travel debugging or state replaying, or anything like that. But the very huge downside of this is that it gets very messy if the state hierarchy is any deeper.</p><p>The alternative is something like ImmerJS, where the reducers are allowed to mutate the state directly: <u>state.very.deep.someArray.push(value)</u>. The library takes care of efficiently making a copy of the state object as needed, meaning that if something isn't modified, it also doesn't need to be copied either. So that 10 000 element array isn't copied each time the counter is incremented. And it works great. The code is A LOT simpler, and the performance penalty isn't actually that huge thanks to the on-demand functionality.</p><p>Imagine my disappointment when I realized this. I'd been waiting for records for maybe a few years already, before even hearing about ImmerJS. And then when I finally got them, the problem I wanted them to solve had changed... But that isn't to say they aren't a good addition, and can't be used elsewhere. But state stuff was what I was really waiting for. For carrying simpler things - especially events - they are great.</p><p style="text-align: center;">* * *</p><p>But back to the state games. The dream is to be able to write them in C# and surpass the productivity of React and Redux. So, what do? Why not just mutate the global state like all the other normal games? The state can be explicitly copied if need be. Well. That is a good point. Can't really counter it. (There are also some very React-like bindings for Blazor and MAUI, solving the other part of the equation.)</p><p>Except if the domain is network games!
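</p><p>To make the contrast concrete, here is a minimal TypeScript sketch of the two reducer styles described above; the state shape is invented purely for illustration:</p>

```typescript
// Hypothetical state shape, made up for this example.
interface State {
  counter: number;
  very: { deep: { someArray: number[] } };
}

// The plain Redux style: return a fresh object, never mutate the input.
function increment(old: State): State {
  return { ...old, counter: old.counter + 1 };
}

// The same style with a deeper hierarchy: every level must be spread by
// hand, which is exactly the messiness complained about above.
function pushValue(old: State, value: number): State {
  return {
    ...old,
    very: {
      ...old.very,
      deep: {
        ...old.very.deep,
        someArray: [...old.very.deep.someArray, value],
      },
    },
  };
}

const s0: State = { counter: 0, very: { deep: { someArray: [1, 2] } } };
const s1 = pushValue(increment(s0), 3);
// s0 is untouched; s1 shares the unchanged parts structurally.
console.log(s1.counter, s1.very.deep.someArray, s0.very.deep.someArray);
```

<p>With ImmerJS the second reducer collapses into a single <u>produce</u> call whose body is just <u>draft.very.deep.someArray.push(value)</u>.</p><p>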
If the game can be constructed in such a way that there aren't too many state changes (and preferably the state itself is small), it would be trivial to transmit those changed states over the network, and render the state in the client. And the big thing: what if we took a page out of the ImmerJS playbook, and replaced the state object on the server with one that keeps track of the concrete changes made to it? Then just the changes themselves could be transmitted over the network. No need for expensive copying, and no need to transmit the whole state. Also no need to "manually" compute the deltas, as the data structure itself does it. It sounds so cool!</p><p>Although realistically I'm not sure if this has ever been a problem. The states in most games should be relatively small anyway, and especially with the hardware today the deltas can be computed with just brute force by even relatively simple algorithms. Just like games like Quake 3 have been doing for ages. I highly doubt it will at any point really be that prohibitively expensive. But one can always dream of doing things better. Especially when it comes to cloud-scale and IoT, where every cycle counts.</p><p>Speaking of which. Related to the above, I've been building a general-purpose framework for state-based applications. I'm not sure if I'll ever really get to using it, but it's been nice coding something of my own once in a while. And using Redis always evokes warm fuzzy feelings :) If executed well, a framework like that might have some money-making potential. Or just be a nice tool to easily create some multiplayer games. Or just research, as always. That's the most important point.</p><p>This framework I'm making consists of an ASP.NET Core SignalR WebSocket gateway that handles user registrations and authentication (with EdDSA JWTs! <small>Although with an unsafe curve, because libraries...</small>) and then connects them to sessions that are persisted on a sharded Redis pool.
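</p><p>Backtracking to the change-tracking idea for a moment: in JavaScript land it could be roughed out with a Proxy that records every assignment as a path/value delta. This is only a sketch of the concept in TypeScript — the actual framework is C# on Redis, and all the names here are invented:</p>

```typescript
// A delta is a path into the state plus the newly assigned value.
type Delta = { path: string; value: unknown };

// Wrap a state object so that every write is recorded; reads of nested
// objects return a proxy too, so deep writes are captured with a full path.
function track<T extends object>(state: T, deltas: Delta[], prefix = ""): T {
  return new Proxy(state, {
    get(target, key) {
      const value = (target as any)[key];
      if (value !== null && typeof value === "object") {
        return track(value as object, deltas, `${prefix}${String(key)}.`);
      }
      return value;
    },
    set(target, key, value) {
      (target as any)[key] = value;
      deltas.push({ path: `${prefix}${String(key)}`, value });
      return true;
    },
  });
}

const deltas: Delta[] = [];
const state = track({ counter: 0, player: { x: 0, y: 0 } }, deltas);

state.counter = 1;   // records { path: "counter", value: 1 }
state.player.x = 10; // records { path: "player.x", value: 10 }

// Only `deltas` would go over the wire, not the whole state.
console.log(deltas);
```

<p>A real implementation would also have to handle arrays, deletions and batching, but the core trick is just that the data structure itself knows what changed.</p><p>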
The clients receive state updates and can send inputs to a session's input queue. They can also optionally see the other clients participating in the session. The session itself is managed by server applications (for the lack of a better name). They connect to the session's Redis server and take ownership over sessions assigned to them (or otherwise delegated upon them). Then they simply take inputs from the input queue and mutate the state based on the input and the current state (just like Redux), and finally publish the new state to the clients (in the future hopefully with a delta). They can also inject their own events (such as time passing) to the event queue. If a client needs to reconnect, it can simply read the current state from Redis. And if the server crashes or needs to restart, the state and the inputs are persisted in Redis, resulting in no data loss. Well, of course as long as Redis stays alive.</p><p>The key point is that this kind of architecture enables a laughably easy way to upgrade the server code, as there isn't really any downside to killing the server application and then having it restart with changed code. When it starts it just reads the current state and starts processing inputs like before. Of course this could just be achieved by co-operatively shutting down the old server instance and saving the state before it closes. But that isn't any fun. And it would be extra effort to support taking over individual sessions. With that system it comes for free. Not that it would really be of that much use, but how cool is that in theory! (Each session has a host serial, and only the application server holding the most recent serial can make changes to the state. When taking over a session the serial is just incremented, invalidating the old server.)
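</p><p>That host serial is essentially a fencing token. Here is a toy TypeScript sketch of the takeover rule, with a plain Map standing in for the Redis side; all names are invented for illustration:</p>

```typescript
// In-memory stand-in for the Redis-side session record.
interface Session { hostSerial: number; state: unknown }
const sessions = new Map<string, Session>();

// A server takes over a session by bumping the serial; whoever held the
// previous serial is invalidated from that point on.
function takeOver(id: string): number {
  const s = sessions.get(id) ?? { hostSerial: 0, state: null };
  s.hostSerial += 1;
  sessions.set(id, s);
  return s.hostSerial; // the token this server must present on every write
}

// A write only goes through if the caller still holds the latest serial.
function tryWrite(id: string, serial: number, newState: unknown): boolean {
  const s = sessions.get(id);
  if (!s || s.hostSerial !== serial) return false; // stale holder, rejected
  s.state = newState;
  return true;
}

const a = takeOver("game-1"); // first server, serial 1
const b = takeOver("game-1"); // second server takes over, serial 2
console.log(tryWrite("game-1", a, "from A")); // false – A was invalidated
console.log(tryWrite("game-1", b, "from B")); // true
```

<p>In the real framework the serial check and the state write would of course have to happen atomically on the Redis side, presumably inside one of those Lua scripts.</p><p>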
There isn't any application-specific code on the gateway, as all the clients communicate only via sessions, and one client is connected to only one session at a time. This means that it is easy to spin up as many instances of the gateway as is needed, and other than the Redis session backplane there really isn't any cross-communication between the gateways. Except of course the user registration and login. Also, the sessions themselves don't need to talk to other sessions and are completely self-contained except for the orchestration (what server instance starts serving a new session). This means that the Redis side of things can also be easily scaled by sharding the sessions by their id. And further, the server application nodes themselves can also be infinitely scaled. So should something happen and the games running on that framework become hugely popular, it is no problem architecture-wise to just spin up some more instances.</p><p>The only problem is how reliant this is on Redis. The initial prototype I've been building makes excessive use of Redis Lua scripting and combines a dozen operations in things like adding an input to the queue. It should be extremely unlikely, but should something go sideways during the execution of such a script, the recovery won't necessarily be easy. Although most of those operations are about checking consistency and updating expiries anyway, so it really isn't a problem. But what I am really interested in is the performance. I'm really curious to see what kind of performance characteristics this kind of system has. Also, scalability is nice and all, but as I've talked about before, single-node performance is what is really the name of the game. Fewer nodes, less cost. This isn't really going to end up in that category ':D But hey, we have to remember that all's of course not that straightforward, as development time is also a factor. For once...</p><p>And this thing here is actually really simple and easy.
I hope to prove it. At least to myself :d After that I'd like to test some other topologies. Dwell on all the missed performance and the system's dependency on Redis. A loose coupling between the gateway and the servers, but an even tighter one decoupling them.</p><p>But at least it'll be fun.</p>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-64154829599844870082020-09-06T11:17:00.011+03:002020-10-10T11:22:54.833+03:00Dreaming about Ampere<p><b>Hello again</b>,</p><p>quite a while since the last update. Again. But at this point you should know me well enough to expect it. Or maybe you just happened to read the previous post. Go figure.</p><p>Anyway.. One lesser thing I've been pondering is how me upgrading to an ultrawide display might have played a part in me not finding as much joy from gaming as I used to. Sure, this is a minor thing considering all the other factors, but a factor nonetheless. Or at least the lack of the best possible GPU is a good excuse to not play games. Regardless of that, I've been in the process of upgrading my GPU for a while now, and I was extremely happy with how good of a product Nvidia managed to launch with Ampere!</p><p>"A great generational leap in performance", good pricing, and even the Founder's Edition models seem very good. Typically FEs themselves have been a bit lacking, but now it seems that they might actually be the better product! I'll still of course have to wait for the reviews, but I don't remember being this excited about hardware in quite a while. It's also one of the very few things I've really been excited about the whole year.</p><p>And that's not all. I recently got myself the Valve Index VR kit. The waiting list for it was so long that I'd almost forgotten about it. Now, I've mostly been using it to get some exercise in the form of Beat Saber, but I also did play through Arizona Sunshine. 
AS has some flaws, but the gunplay itself was rather nice. I also started Half-Life Alyx a while ago elsewhere, but haven't really gotten to play it more. This is unfortunate. Instead, I've been busy playing Minecraft, and just simply keeping busy.. This is hopefully changing, but we'll see. Anyway. What saddens me a bit is how increasing the render resolution scale in AS made the game a lot clearer, but completely killed the FPS. Maybe the new GPU will change that! And most certainly I'll get that sweet 144 Hz on Destiny 2 again, too!</p><p>Hmh. It seems that "whiles" are the staple of my timeframes :p And while this post feels a bit lackluster, it feels good to produce some content once in a <i>while</i>. Maybe I'll even get motivated to write some development-themed post at some point, too. We can all hope, at least :) We'll see, we'll see...</p><p>And I know what you are thinking. About everything. We'll see.</p>Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-62674370363832006022020-05-11T18:32:00.000+03:002020-05-11T18:32:01.017+03:00Presenting USG:Rerolled<br />
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><b>Despite the
hardships</b>, I’ve been able to at least occasionally dedicate some time to the continued
development of this year’s Finnish Game Jam / Global Game Jam game. Worked
alone this time, and still made a nice game. With TypeScript :o</span></div>
<div class="MsoNormal">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRN2Wro9HaQfPRJWaugfG0yHjaiIcOenrOGVb-rWPc1zrlgCC4iy33mhF2KP1_xVjfzVruVC1FNdzqd4MBICOO5-ul-qLHXguKsQkUvQeM6KJGeT-1zzICdQ4mNrnTVVo23YSSo4RsLBI/s1600/sdc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="983" data-original-width="1211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRN2Wro9HaQfPRJWaugfG0yHjaiIcOenrOGVb-rWPc1zrlgCC4iy33mhF2KP1_xVjfzVruVC1FNdzqd4MBICOO5-ul-qLHXguKsQkUvQeM6KJGeT-1zzICdQ4mNrnTVVo23YSSo4RsLBI/s1600/sdc.png" /></a></div>
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Now, with
the continued effort, it’s starting to look pretty good! The game itself is a
mix of <i>Dicey Dungeons</i>, <i>Slay the Spire</i> and maybe even <i>FTL - Faster than
Light</i>. It’s about dice-rolling and loot in turn-based combat, from encounter
to encounter. Originally it was to have gameplay that would have emulated <i>Cultist
Simulator</i> in at least some questionable way, and hence got the name <i>SDC: Slay Dicey Cultists</i>.
But then it occurred to me that this could be an excellent chance to carry
on the torch from the project that is my unicorn, The Peli – or as it’s more recently known, USG.
So let’s give it up for <i>USG:Rerolled</i>!</span></div>
<div class="MsoNormal">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1mNYrrGBAa-hRbDZ4NKUXbpxcWojeSxAb-1nOup21qL2F4HWYB1nKWuNfj6agSQSbbMiFNqm7JPih0cPvUrU-uyh6Qry845F9ZHRPFgK-Ms6J2NhcumWhZSb7e12jFFdX3Mp4YYHIbXY/s1600/usgr.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="984" data-original-width="1215" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1mNYrrGBAa-hRbDZ4NKUXbpxcWojeSxAb-1nOup21qL2F4HWYB1nKWuNfj6agSQSbbMiFNqm7JPih0cPvUrU-uyh6Qry845F9ZHRPFgK-Ms6J2NhcumWhZSb7e12jFFdX3Mp4YYHIbXY/s1600/usgr.png" /></a></div>
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The main focus
of the game is the equipment, and the many effects the pieces can have. This
made me choose to implement the effects in straight-up code, instead of
trying to codify all the effects in some kind of standardized structural form. It's been a good choice for productivity, but I dread the day I need to make some kind of breaking change. I also did some snooping, and found that this is how other games have chosen to
approach this problem, too. As the game will have (and already has) quite an assortment of equipment, it only made sense to also create an editor for the
items. And the editor. Well… I guess I’ve spent as much time on it as the game,
or something `:D</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWE33NUpJR9r3cSg74_CjiIdIoiPuA8_c89-WI5tXFwLVybqaTLvZJ89idRDI_X1AY3Zg0xHY1-KASUYfL6yWiHCOZEhJHp-NZNUkH1WVe_bRrkfaMLGgPbMbTYE-XpIBBxbnj2qMTFqE/s1600/usgr_editor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="987" data-original-width="997" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWE33NUpJR9r3cSg74_CjiIdIoiPuA8_c89-WI5tXFwLVybqaTLvZJ89idRDI_X1AY3Zg0xHY1-KASUYfL6yWiHCOZEhJHp-NZNUkH1WVe_bRrkfaMLGgPbMbTYE-XpIBBxbnj2qMTFqE/s1600/usgr_editor.png" /></a></div>
</div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The editor
has some standard fields for the most basic attributes of the equipment. It
also has an integrated code editor with syntax highlighting and auto-completion. This is achieved by embedding the very same text editor component (and diff viewer) that powers <i>Visual Studio Code</i>, the <i><a href="https://microsoft.github.io/monaco-editor/">Monaco Editor</a></i>.
The changes made via the editor are versioned separately from the rest of the game, and the editor has an ASP.NET Core backend implementing the filesystem and code generation functionality.
Upon saving the data, the game itself is automatically reloaded with the new
equipment data. I’m rather pleased with the setup. I’m planning on extending the
editor to also create random encounters for the game, to balance out all the
combat. Then there’s always some quality-of-life improvements to be done… But
overall, it’s already in surprisingly good shape! The editor even has a graph of the saved versions and their relations...</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfMz_vZtKbAK5spYQn59jX3rGDtg07Z9dcaKIzrW3-R5bch_fBvfzvpFs17tx8LtKRXVy6shoiY32MmpKgttBvhSp4NRD0JHp0ija0MMvXVuxwg73ELCAbUmxWfIiHMbV3GEKRV4-1Xmo/s1600/usgr_editor_versions.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="1415" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfMz_vZtKbAK5spYQn59jX3rGDtg07Z9dcaKIzrW3-R5bch_fBvfzvpFs17tx8LtKRXVy6shoiY32MmpKgttBvhSp4NRD0JHp0ija0MMvXVuxwg73ELCAbUmxWfIiHMbV3GEKRV4-1Xmo/s1600/usgr_editor_versions.png" /></a></div>
</div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The latest
addition to the game was an encounter map, which brings some structure to the
game. Compared to the work already put into the game, this was a relatively small
addition, but it did take a few days to get the SVG-based drawing and random
generation to work in a satisfactory way. Evening out the randomness would be the next step. Speaking of next steps, I’m kinda testing out if I could bring ships
with limited hardpoints to the mix, without everything getting too confusing. Then
I’d be quite close to the dream that is USG. See you next time!<o:p></o:p></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1c5UR4P92cYkRVfkofg_n0kGRdF_35nCwxleUcdwyff1QjKFS8aQ1FTCptrlGEB65JSPdkbwLD8MZBDLqa6tuHI1tvl80tQeQ3BntGKlNHx6OJhkHAFgjHRlxsPuMpfSx0d-MMNYYbjo/s1600/usgr_map.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="982" data-original-width="1213" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1c5UR4P92cYkRVfkofg_n0kGRdF_35nCwxleUcdwyff1QjKFS8aQ1FTCptrlGEB65JSPdkbwLD8MZBDLqa6tuHI1tvl80tQeQ3BntGKlNHx6OJhkHAFgjHRlxsPuMpfSx0d-MMNYYbjo/s1600/usgr_map.png" /></a></div>
<br />
<br />
<br />
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-64528444406073309042020-03-13T10:19:00.002+02:002020-03-13T10:19:24.992+02:00Plans<br />
<div class="MsoNormal">
Everything
I planned for. Crumbling before my very eyes. And I’m not even talking about
the pandemic.</div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">I kinda
thought that now that I got my studies finished, I’d get a chance to focus on
what is important and make time to <i>do</i> things. But that didn’t quite pan
out, as some things surfaced that’ll be requiring my attention. And not just for a little while, but for
several years :( And some things that directly contradict everything.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Though it (mostly, not completely) depends on whether I think about those things or not (: It would be wise to
give some thought to one of them, but thinking about it doesn’t really help it
at the moment… Was this vague enough? :s</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">But, as they say: it is important to try to keep a sense of normality in a time of crisis. It remains to be seen how well that'll pan out.</span>
Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-70695385323142241872020-01-20T17:44:00.003+02:002020-03-13T15:32:40.584+02:00Nuget package creation addentum<b>A short</b> <i>and</i> informational blog text for once!<br />
<br />
In preparation for some big plans™ I changed the way I generate my Nuget packages. The old way, where all the metadata was specified in the .nuspec file, was otherwise good, but it didn't specify version metadata for the actual DLL.<br />
<br />
I've now changed the process so that the version is specified in the .csproj file, via the <u>VersionPrefix</u> element. I've also defined a <u>VersionSuffix</u> element with a value of <u>dev</u>. This way, when the project is built "locally", the assembly's ProductInfo field reads for example <u>0.1.0-dev</u>. Whereas when the Nuget package is built I pass the option <u>-Properties VersionSuffix=</u>, resulting in <u>0.1.0</u>. And as a side effect of moving the version element away from the nuspec, I also had to specify <u>-Version $version</u>, where <u>$version</u> is a PowerShell variable parsed from the csproj file, and include a placeholder version in the nuspec file.<br />
<br />
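To illustrate the parsing step (a sketch only: my actual script is PowerShell, but the element names are the standard MSBuild ones):

```python
# Sketch of extracting VersionPrefix/VersionSuffix from an SDK-style csproj,
# mirroring what the PowerShell packaging script does. Illustrative only.
import xml.etree.ElementTree as ET

def read_version(csproj_xml: str) -> str:
    """Return e.g. '0.1.0-dev' for a local build, '0.1.0' when the suffix is cleared."""
    root = ET.fromstring(csproj_xml)
    prefix = root.findtext(".//VersionPrefix", default="0.0.0")
    suffix = root.findtext(".//VersionSuffix", default="")
    return f"{prefix}-{suffix}" if suffix else prefix

example = """<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <VersionPrefix>0.1.0</VersionPrefix>
    <VersionSuffix>dev</VersionSuffix>
  </PropertyGroup>
</Project>"""

print(read_version(example))  # 0.1.0-dev
```

The same value can then be handed to <u>nuget pack</u> as <u>-Version</u>.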
While slightly complex, this now allows me to fetch the version programmatically and know whether the library is an "official" release version via Nuget, or a locally compiled version with unversioned changes. I've also considered just injecting the version together with the prefix as a build step, but for now I'll try my luck with this thing that needs the assembly to reside on disk:<br />
<br />
<script src="https://gist.github.com/Vazde/4f34e6de814aba5bd87a808332c6245a.js"></script>
PS: there's a ton of unresolved issues with the Nuget CLI program, many of them several years old. When investigating the above, I also noticed that my Nuget packages <a href="https://github.com/NuGet/Home/issues/5979">don't</a> include <a href="https://github.com/NuGet/Home/issues/4491">external</a> libraries as dependencies even when using the option <u>-IncludeReferencedProjects</u>. It is a reported bug, but hasn't been fixed. One alternative might be to use <u>dotnet pack</u> instead of <u>nuget pack</u>, but I'm not sure what other changes that would entail.Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-61395815288367803622020-01-06T17:49:00.001+02:002020-01-06T17:50:48.330+02:00InfluxDB broke, againIt wasn't long ago that I <a href="http://blog.dea.fi/2019/12/at-least-alerting-works.html">blogged</a> about how I experienced my first fault with InfluxDB. And now it has happened again. This time a bit different, though. And much worse.<br />
<br />
It all started with the familiar NATS alert, so I tried rebuilding the indexes again with <u>buildtsi</u> and thought that would be it. I was wrong.<br />
<br />
This time there was an invalid CRC on one data block, which caused segfaults (lol) within the InfluxDB executable. Speaking of error checking... the verify command in influx_inspect has a bug. It erroneously reports a block as healthy, because the <a href="https://github.com/influxdata/influxdb/blob/1.7/cmd/influx_inspect/verify/tsm/verify.go#L83">right counter is never incremented</a>. But anyway... The block is faulty. What now? Where is the option to fix it, or remove the block? Removing the offending file manually doesn't help.<br />
<br />
So in the end this made me abandon the data, and start from scratch :'( And for some reason docker-compose kept doing some weird caching with the data (even when it was a filesystem mount), even after downing and removing the old container, and emptying the old data directory, and it was an effort to get everything working again...<br />
<br />
I should probably consider upgrading the hardware. If that is the real problem. For the lulz I also asked for a quote of InfluxDB enterprise. If I cluster it, then one node can fail and it recovers, right? Right?? But I don't expect the quote to be realistic. And even if it was, it's probably still too much. One alternative might be Apache Druid. But it, too, seems a bit too young a product.Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-32321427395393136962019-12-23T14:56:00.000+02:002019-12-23T14:56:21.888+02:00I graduated<b>1.5 months ago</b>. I guess it should feel like something? Clearly not, as I've been in no hurry to blog about it in more depth. And it only took 8.5 years :d<br />
<br />
That is also probably the reason. I've been working almost full-time for about 4.5 years now, and my studies were already somewhat complete even when I started working. During that time they just progressed slowly and sluggishly, but relatively surely. The two larger challenges were passing Swedish and then of course the thesis. Everything has just been one large marathon, with the goal of just reaching the finish line.<br />
<br />
And now I did. Sure, it's an accomplishment, but is it really anything that special? The goal was always there and reaching it didn't really come as a surprise; there was no great and overwhelming feeling upon reaching it. The race just ended, and there was little more to it than that.<br />
<br />
<div style="text-align: center;">
* * *</div>
<br />
But actually, there is a bit more. In order to get to that, let's talk a bit about the thesis itself. Like my Bachelor's, my Master's was about something that I was researching at the time. For many years I've been interested in backend development. Now everything's finally giving so much return of interest functionality-wise that it's worth investing in deployment. What good is it to be able to write backends with little effort, when making them available and easily accessible is uncharted territory?<br />
<br />
So for my thesis I wanted to take a look into the process of creating backends in a way that made them easier to deploy. I didn't want to chomp on too big a piece, so I purposefully didn't seek an automated deployment pipeline. I knew better than that. But the real friends are the ones we made along the way, so buckle up ;)<br />
<br />
I had previously written a simple restaurant menu parser for my Nokia e51 in PHP. When the page was requested, the server fetched the menus from preset restaurants and rendered a slimmed down HTML-only page with just the menu contents. This was very advantageous compared to opening the site(s) of the restaurants manually and waiting for needless data transfer over a slow mobile connection, and then the slow rendering of complex content and scripts on the low-power mobile device.<br />
<br />
Anyway. When the parser was in need of an upgrade I ported it to this new thing called .NET Core and hosted it on a now-retired iteration of usva. Then at one point I also dockerized it and made it run on Azure. This was done because that was the only application running on usva, and now I could do other things with the server (for example turn it off :p). But anyway. That was the first step on the journey of improved hosting.<br />
<br />
Several years passed until the thesis. I needed an application to use as the base for it, and after making a list of possibilities the menu parser was kinda the only reasonable choice. Other alternatives would have been far too work-intensive in their implementation. I needed a tried concept so that I could focus on just researching deployment and hosting (and common design things). But of course I couldn't not upgrade the application, so for the thesis I took the menu parser logic of the old system, and wrote a whole new site, including a history database for the foods and even some user-specific stuff as an example. I also added an admin API and some other features such as centralized structured logging.<br />
<br />
While the academic goal was to present some design guidelines for containerized applications, the practical goal was to create the basic building blocks that could later be used for more involved DevOps concepts, namely build and deployment automation. At the start I was not sure how these would end up looking, or even if I'd ever get to see them.<br />
<br />
But ultimately I ended up with something I can be quite satisfied with! I could use the Dockerfile to automate building the application, and then further use Docker Compose to define the runtime: namely the exposed ports, configuration and the whole set of services (application, updater-parser, database and log storage). What still remains to be improved is how the database is used, as some manual steps remained for database migrations. But for everything else the process was automated quite far.<br />
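The runtime definition looked roughly like this (a hypothetical sketch: the service names, images and ports here are illustrative, not the actual ones from the thesis):

```yaml
# Hypothetical docker-compose sketch of the described runtime; service and
# image names are invented for illustration.
version: "3"
services:
  app:               # the ASP.NET Core menu application
    image: menus/app
    ports:
      - "80:5000"    # host port -> container port (illustrative)
    depends_on:
      - db
  updater:           # the updater-parser fetching the menus periodically
    image: menus/updater
    depends_on:
      - db
  db:                # history database for the foods
    image: postgres:12
    volumes:
      - dbdata:/var/lib/postgresql/data
  logs:              # centralized structured log storage
    image: datalust/seq
volumes:
  dbdata:
```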
<br />
The goal was to simplify hosting and deployment, and it really did get a lot simpler. Just one command to build the app, another to copy it over to the target server, and one final command to update the running instances. But what is even more important is that it is the same three commands no matter the app. This opens the way to efficiently building automation in the future by replacing just those commands. But it also makes the manual work easier.<br />
<br />
<div style="text-align: center;">
* * *</div>
<br />
The work done on the thesis is just a part of a larger continuum about backend development. While the thesis (and graduation in general) is a great step, the work never ends. And I'm already busy with the "next" steps - as indicated by the surprisingly many blog posts as of late. There is always so much more work to be done to get better. This same train of thought extends to my other hobbies, too. In the shadow of the continuum, there just doesn't seem to be any time to feel the pride and accomplishment of the past.Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-69468000637385341872019-12-17T21:33:00.000+02:002019-12-23T14:56:51.179+02:00At least alerting works...I've also done some work in setting up infrastructure monitoring. There's still work to do - and do better - but at least I have one alert. And it works.<br />
<br />
I think I might have just been thinking of doing something more leisurely, but Grafana sent me a Telegram message that there was something wrong with NATS response times. I open the link and see that there is no data, meaning that the instance is probably down. But there's also no other data at all. Fuck. Everything going up in flames this soon?<br />
<br />
But wait a sec. There is data, momentarily. Then it disappears again. I bring up logs for InfluxDB and see an error "panic: keys must be added in sorted order". I spend quite a while trying to figure out what exactly is wrong and how to proceed, almost giving up. It seems that a lot of the tooling for fixing and managing the files has been removed or made internal-only. But then I find an up-to-date guide for <a href="https://docs.influxdata.com/influxdb/v1.7/administration/rebuild-tsi-index/">rebuilding the index</a> and decide to try it.<br />
<br />
Because my installation is dockerized, and there seem to be some issues with the rebuild command, I had to chown the data directory to "some user", then run the repair command, and then chown the files back. And yay! It works again. For reference, the docker command I used: <u>docker run --rm --user 1000 -v /path/to/influxdb-data/:/data influxdb:1.7.9 influx_inspect buildtsi -v -datadir /data/data -waldir /data/wal</u><br />
<i><br /></i>
At least it works again. But just as I thought everything was going nicely... Maybe the problem is the server itself? It served as my desktop earlier, but I moved away from it due to constant crashes with GTA V, and much rarer crashes other times. Maybe I have to invest in some proper hardware :o We'll see. Maybe it'll work again without issue for a long time. Pls :sVazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-16843237981820364292019-12-17T18:07:00.000+02:002019-12-18T11:39:16.666+02:00Testing the stack with CircuitPython<br />
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><b>Like I
stated</b> earlier, one of the reasons why I’ve lately been so focused on improving my
stack is the fact that I backed Meadow F7 a while back. I kinda want to maximize
my productivity with it, so I’ve been doing what I can beforehand (and while the
mood lasts).</span></div>
<div class="MsoNormal">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS7jSxOlRDxGcyCc3YLlIMouoSE5vbXzW8AV_eVH4VpxI3EOIjbs2-nzQNOCCW_ycr8Im3ku7WNRmG3eRh__4wLSCdObIBjsUAMlc6NrEbfF67pROvvPVFskZJd_WqWTjVz6YH28iYx3Y/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="998" data-original-width="1331" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiS7jSxOlRDxGcyCc3YLlIMouoSE5vbXzW8AV_eVH4VpxI3EOIjbs2-nzQNOCCW_ycr8Im3ku7WNRmG3eRh__4wLSCdObIBjsUAMlc6NrEbfF67pROvvPVFskZJd_WqWTjVz6YH28iYx3Y/s1600/1.png" /></a></div>
I already
talked about how this work included setting up Grafana and other data collection
facilities. I also ‘teased’ about using NATS for registering some
long-running ad-hoc jobs. I’ve been calling the NATS-based <i>thing</i> Sumu. More about it later. But anyway, now these were put to a somewhat unexpected test when I ordered a bunch of preparatory
stuff from Adafruit and got a <a href="https://learn.adafruit.com/adafruit-circuit-playground-express">Circuit
Playground Express</a> as a freebie.</div>
<div class="MsoNormal">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdLrvhOm_IrPYKihWjDKueyQ1Vygo5IQ0Ebg5yEAK2KuAE7r8nyeXLtsJl0k4Ejq5cy4pMXzhYFdADxGPlHtE7nbwcYIo6Le_nmYais-aVx_QuxwlxJ9Tg4haSsTQ9XL4gyclUf8hmS70/s1600/2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="656" data-original-width="1600" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdLrvhOm_IrPYKihWjDKueyQ1Vygo5IQ0Ebg5yEAK2KuAE7r8nyeXLtsJl0k4Ejq5cy4pMXzhYFdADxGPlHtE7nbwcYIo6Le_nmYais-aVx_QuxwlxJ9Tg4haSsTQ9XL4gyclUf8hmS70/s1600/2.png" /></a></div>
I’m trying
to keep this post short, so I’ll just state that it can’t really be any simpler
to get some samples running with it. The basic documentation is very thorough,
there’s a bunch of sample code available, and a bunch of sensors are included
in the board itself. As kind of an embedded Hello World, I moved on to the built-in
temperature sensor after blinking an LED and <a href="https://learn.adafruit.com/adafruit-circuit-playground-express/playground-drum-machine">playing some drum samples</a>.</div>
<div class="MsoNormal">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlaG5U-v1-KcvFbzcv8q8M0uWOY6y6IPScOQlDmYm0ZQ-DKCXICWHmkWn4FpZUMMMzQaldHWbYxUQZryWjbPYp1XlyK3vhzYLlG6CJm2X3NMAYiqcV6DvKTuGviknE8Non3onO_p25mp0/s1600/3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="119" data-original-width="424" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlaG5U-v1-KcvFbzcv8q8M0uWOY6y6IPScOQlDmYm0ZQ-DKCXICWHmkWn4FpZUMMMzQaldHWbYxUQZryWjbPYp1XlyK3vhzYLlG6CJm2X3NMAYiqcV6DvKTuGviknE8Non3onO_p25mp0/s1600/3.png" /></a></div>
</div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The code on
the device reads the temperature once a second, and prints the averaged value
every 10 seconds over the serial console. On PC I have a LinqPad script registering
itself to Sumu (with a health check), reading the serial console, and pushing
data to Postgres while also serving the current (or next) temperature value via
Sumu to browsers (via Node-Red). There’s even error handling and retrying in
case the serial console gets disconnected for some reason (for example the
device is unplugged). All this in 180 lines (with the majority of it being serial
console stuff :p) and maybe an hour or two! Feeling good about things!<o:p></o:p></span></div>
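The device-side loop is roughly this (a reconstruction from memory, not the exact code; the <i>adafruit_circuitplayground</i> API is CircuitPython's, so the import only works on the device):

```python
# Reconstruction of the device-side loop: sample the temperature once a
# second, print the 10-sample average over the serial console.
import time

SAMPLES = 10  # average window, matching the once-a-second / every-10-s scheme

def average(readings):
    """Plain averaging helper, kept separate so the logic is testable off-device."""
    return sum(readings) / len(readings)

def run():
    # Device-only import: available in CircuitPython, not on a PC.
    from adafruit_circuitplayground.express import cpx
    while True:
        readings = []
        for _ in range(SAMPLES):
            readings.append(cpx.temperature)  # degrees Celsius
            time.sleep(1)
        print(average(readings))  # shows up on the serial console
```

The PC side then just reads those printed lines off the serial port.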
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Let’s just
hope that this is just the beginning, and not the peak :s<o:p></o:p></span></div>
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-76170777402372663752019-11-28T11:05:00.002+02:002020-01-20T17:45:37.036+02:00How to create a Nuget package<br />
<div class="MsoNormal">
<span lang="EN-US">Like mentioned in a <a href="http://blog.dea.fi/2019/11/my-own-private-nuget-feed-and-story.html">previous blog post</a>, I’ve
been looking into creating and hosting Nuget packages (the package management
system for .NET). While there are still a lot of things I don’t yet have
experience with, I feel confident enough to share the basics of it. But mostly because now I have the process written down for me, myself and I.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Start by creating a new class library with
<u>dotnet new classlib</u> and implement whatever the library should do. At least for
simpler libraries there really aren’t any Nuget-specific considerations. You can
also start by installing the nuget-binary somewhere in your <u>PATH</u>.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">After the implementation is done, it is
time to add the Nuget-specific metadata. This information can be specified in
two ways. Either by independent nuspec-files, or via the csproj-file on the
actual project. I tried to use just the project files, but ultimately this
proved to be a problem due to limitations of the Nuget CLI. If a project
references another, and that other project doesn’t have a nuspec-file, then it isn’t
added as a dependency, but instead the binary is directly included in the first
package. So, in the end I thought it was easier to just define all the metadata
in a single file, the nuspec-file. Here is an example:</span><br />
<span lang="EN-US"><br /></span></div>
<script src="https://gist.github.com/Vazde/35b6df0734f26c36d67d3323d06e80e2.js"></script>
<br />
<div class="MsoNormal">
<span lang="EN-US"><br /></span>
<span lang="EN-US">This file should be named similarly to the
package id and project file. So in this case <u>Dea.WebApi.AspNetCore.nuspec</u> and <u>Dea.WebApi.AspNetCore.csproj</u>.
To generate the package containing the binary of the project and all
dependencies correctly marked, use a command in form of <u>nuget pack -Build
-IncludeReferencedProjects Dea.WebApi.AspNetCore.csproj</u>. Even though the
argument is the csproj-file, the nuspec-file is also automatically used, as it
is similarly named. This also applies to the linked projects and their nuspec-files.
If you want to make debugging easier, you can also add <u>-Symbols</u> flag to the
command.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Pushing this package to a repository is
easy if you already have one set up (like described in the previous post): <u>nuget
push -Source DeaAzure -ApiKey az Dea.WebApi.AspNetCore.0.2.1.symbols.nupkg</u><o:p></o:p></span></div>
<br />
<div class="MsoNormal">
<span lang="EN-US">I was going to rant here about how
Nuget packages and Azure Artifacts break the debugging experience, as there
really isn’t an easy way to get the symbols working. But it turns out that
everything supports them after all. It was just a matter of including them, and
being sure to change the format from Portable to Full. Changing the PDB-format
is usually one of the first steps I make when creating a new project, but for
some reason forgot to do it this time. Now everything just works :) Though I
would have preferred to have the symbols hosted separately so that not everyone
who has access to the packages has access to the symbols, too. Not that it
really matters, as I probably have to use Debug-builds anyway, and it is so easy to decompile the files back to source. And maybe some packages even have the source code available anyway...<o:p></o:p></span></div>
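<div class="MsoNormal"><span lang="EN-US">For my own future reference, switching the PDB format away from Portable can be done in the csproj; a minimal sketch using the standard MSBuild properties (typically set per build configuration):</span></div>

```xml
<!-- .csproj: emit full (classic Windows) PDBs instead of portable ones -->
<PropertyGroup>
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
</PropertyGroup>
```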
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">But this is actually one great unknown. Is
there a way to nicely host (and consume!) different flavors of packages? One
with an optimized release-build, and another one in debug mode with all the
symbols included. A quick search didn’t reveal a simple solution for this.
Maybe we’ll never know.</span><br />
<span lang="EN-US"><br /></span>
<span lang="EN-US">Edit: also see the <a href="https://blog.dea.fi/2020/01/nuget-package-creation-addentum.html">addendum</a> about versioning.</span></div>
Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-59162272915429906042019-11-26T12:47:00.000+02:002019-11-26T12:47:15.033+02:00Venting about Android development and push notifications<br />
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">As part of
the ongoing effort to modernize my technology stack, the question of
notifications constantly surfaces. Today the de facto solution for them on Android
is to use Firebase Cloud Messaging (FCM). As I’m trying to use C# for
everything, I’ll be using Xamarin.Android. Microsoft has <i>relatively</i> good <a href="https://docs.microsoft.com/en-us/xamarin/android/data-cloud/google-messaging/remote-notifications-with-fcm">documentation</a>
on FCM, so I won’t be repeating it. Instead, this post is primarily about
venting about Android development, even after Xamarin makes it slightly more
bearable. This post is not to be taken as a general indication of, well,
anything.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">* * *<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">So. I
create a new basic Android application. I try to register a new project in Firebase
and add my server as a client application, but Google’s documentation is out of sync,
so I don’t find what I’m looking for. I finally manage to stumble onto the right
page and can get to work.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">I create a
scaffolding for my notifications server and try to send the device token to it
from the mobile application. It fails. The documentation uses a deprecated API,
but that’s not it. Okay, I’ll just look at the logs. But hah, fuck that. I didn’t
start the application via the debugger, so I don’t get to filter the logs by application.
I have to use the command-line version of logcat, and instead I’m flooded with
messages from AfterImageCompositionService, the touch debugger and friends. I
can’t even filter the logs sensibly by my log tags, because tags were changed
years ago to be limited to something like 23 characters. But I finally find
the error: I can’t use plaintext HTTP anymore on Android by default. </span>Not even
on the local network! The “correct” way would seem to be to define some kind of complex security
policy, so I just go for the alternative of slapping <span style="font-family: Courier New, Courier, monospace;">usesCleartextTraffic="true"</span> into the manifest.</div>
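<div class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">For reference, the attribute goes on the application element of AndroidManifest.xml; a minimal sketch (the package name here is made up):</span></div>

```xml
<!-- AndroidManifest.xml: allow plaintext HTTP for the whole app (package name is hypothetical) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.notifier">
  <application android:usesCleartextTraffic="true">
    <!-- activities, services, receivers... -->
  </application>
</manifest>
```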
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">And things
work, yay. I can send notifications, and they appear on the phone without the
application even having to be running! But then, as stated in the
documentation, I try to send them while the application is running. And they won’t
work. So I have to manually construct the notification (though this, too, is stated in the documentation),
and hope that its appearance matches the “built-in” one.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">And then
the documentation starts to fall apart. Starting with just minor things, like
not stating that icons should have an alpha channel, or otherwise they appear
as blank squares (the docs were written for an older Android). Then larger things:
the otherwise very clear testing instructions suddenly say nothing about testing
those foreground notifications. Well, that is because they
won’t work if you just follow the documentation. And by now I’ve already forgotten
what I had to do to somewhat fix them.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">And testing
out the code creating the notification takes a few more iterations than I’d be
comfortable with. Building, deploying and running the application actually takes
quite a bit of time, even when I have the relevant acceleration settings enabled.
That is, using the shared runtime and only deploying changed modules of the
application. It also bugs me that the shortcut for the application keeps
occasionally disappearing from my launcher. Then I finally look into this more,
and find out that Xamarin, at least on Visual Studio 2019, doesn’t honour those
settings, and just uninstalls and reinstalls the application every fucking time.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">But oh, it
gets better. As I try to test both background and foreground notifications, I
have to occasionally close the app so that it is no longer running. I do this
by pressing the “Close all” button on my Samsung phone. And the notifications
won’t work. Little did I know that there was some OEM fuckery at work. Apparently
closing the application this way is the same as killing it (at least as far as OS
state-keeping is concerned). This sets a special flag indicating that the app
was killed, and the OS-level Firebase service <a href="https://stackoverflow.com/questions/39480931/error-broadcast-intent-callback-result-cancelled-forintent-act-com-google-and/53404817#53404817">won’t
deliver messages to applications that have been killed</a>. Fuck this shit.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">After I’ve
regained my composure I move to the next feature on the list. Showing an image
attached to the notification. The <a href="https://firebase.google.com/docs/cloud-messaging/android/send-image">documentation</a>
states that this is as simple as setting the image URL in the notification payload
to a valid HTTPS image. Firebase console even shows a preview of the image! <o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The
documentation was wrong. I spend one full day trying to get the image to work,
and it still doesn’t work. I tried doing it in so many different ways: via both
the generic and the Android-specific settings, via the old and the new Firebase API,
via managed code and by hand, via my server or directly, or via the Firebase
console. Nothing works. I’m starting to suspect that it might be Xamarin’s fault
at this point. I couldn’t really find any documentation confirming or denying it.
I couldn’t even find any documentation on how the Firebase library is supposed to work
at the application level on plain Android. Is it really operating-system level, or
is it just some code the library includes in the application, so that it
really is the application showing the notification even when it wasn’t running?
And maybe with Xamarin no one bothered to implement the code for showing the
image. Maybe I’ll never know. I could create a new Android Java application,
but I really can’t be bothered to do it… Thanks to C#, I’ve come to hate the
clunky syntax and tooling of Java more and more.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">So many unnecessary
hardships in trying to get even a basic application working… Next up is the fun
of trying to keep the device’s Firebase token in sync with the server. Apparently
it can occasionally change, and must then be sent to the server again. But what if
the network doesn’t work right at that moment? I’ll have to manually build the scaffolding
to schedule background work and keep retrying until it succeeds.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">And then
finally I get to implementing more server-side stuff, like removing the device
token from the server should the application get uninstalled. Or delivery
notifications! That’s where things would start to get really useful! Maybe I
can also start to implement notification storage on the device, so that I can
filter notifications by their read status, and not show notifications that were
resent due to network errors. And then maybe implement some kind of caching and
stuff, and data messages. Then I’ll probably have to implement downloading and
showing attached images myself, but on the other hand it should then finally
start working. And then have a nice application with a GUI list showing both
all the past notifications and the unread ones. And then the general
server-led notification system can be expanded to span other devices, too.
Like lightbulbs! And then add some service health notifications from the other
services I’ve been planning or implementing. And general ones too, like package
or price tracking.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">* * *<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Package
tracking was actually the reason I finally looked into this topic. Matkahuolto
didn’t have notifications when new info appeared in tracking, like some other
companies do. So I made them myself. And despite all those hardships, I
actually got the notifications and tracking working quite OK. Then the funniest
thing happened: Matkahuolto <a href="https://twitter.com/routaverkko/status/1194324011095863296">updated their
site</a> like an hour after I got everything working :’D I had to make a quick fix
to the script. Luckily things got easier, as I no longer had to parse HTML and could
get JSON instead. Good thing I included error handling in the script: if the
crawl failed 10 times in a row, it would send an error notification. And now I have
the LG OLED65E9 TV I ordered :3 Perhaps more on the HDMI 2.1 4k120 fun of the
TV later!</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Like I
mentioned earlier, I’m looking to expand into serverless, so that I won’t
have to rely on more error-prone cronjobs and such. I’ve also never run C# code via
cron, and everything there is still in Python. So serverless would enable me to
write these things in C#. But there seems to be a partial solution to this problem in
the form of LinqPad. I’ve been longing for a serverless platform that made code as
easy to execute as LinqPad does. So why not just use it? I even have a Windows PC
constantly running. Of course there is the problem of scripts still maybe
failing randomly without any automatic restart, but a partial
solution exists in the form of monitoring. I’ve been building some general-purpose service
and service discovery code on top of NATS, and maybe that could be
used here, too. A LinqPad script could register itself to that service
discovery / health checking system, and then I’d get notifications if the
script failed in some way! Maybe more of that, too, later. Later, as always.
Well, got to have plans? An endless list of things to do, so that I can never feel
satisfied from having done everything.</span></div>
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-29123504249181273672019-11-25T11:30:00.000+02:002020-01-02T20:18:33.090+02:00My own private Nuget feed (featuring packages for simpler web APIs): the brave new world might really be a thing?<br />
<div class="MsoNormal">
<b><span lang="EN-US">Wow</span></b><span lang="EN-US">, what an
unexpectedly productive end-of-the-year this is becoming! A blog post, again!
Let’s just hope that this is the new normal, and not just something preceding a
less-than-stellar episode.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US">* *
*<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Like I’ve begun to outline in the earlier
posts, I’ve been rethinking my “digital strategy”. It is a pretty big thing as
a whole, and I’m not sure if I even can sufficiently outline all the aspects
both leading to it and executing it. But I shall continue on trying to do it.
This time I’m focusing on sharing code. Jump to the final section if you want
to skip the history lesson and get to the matter at hand.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">As some of you might know, I started
programming at a very early time, relatively speaking. I was self-taught, and
at that time PHP was starting to be the hot stuff, whereas JavaScript was just
barely supported by widely used browsers. No one even imagined DevOps, because
they were too busy implementing SQL-injection holes in their server-side
applications, which they manually FTP’d over, or developed directly in
production, also known as “the server”. And I guess I was one of those guys.</span><br />
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Then I found Python and game programming.
And as everyone should know, there really isn’t that much code sharing in game
programming. At least not if you are trying to explore things and produce them as
fast as possible without too much planning (not that this doesn’t apply to everything else, too). Python was also good for writing
some smaller one-off utilities, snippets and scripts for god-knows-what. It
could also be adapted to replace some PHP scripts. But then I found that the
concept of application servers exists, opening up the way for a completely new breed
of web applications (with some persistence without a database, plus EventStream and WebSockets). There was so much to explore, again.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">I was satisfied with Python for a very long
time. Then I bumped into Unity3D, where the preferred language is C#. A language
much faster than Python, in part due to static typing. I wasn’t
immediately a fan: it took a few years to affect my other programming projects. After that I finally acknowledged all these benefits, and for about 3-4
years now I’ve been trying to (and eventually succeeding to) shift from writing
all my programs, scripts and snippets in Python to writing them in C#. This was also catalyzed by the fact that I worked on a multi-year C# project at work. And now
I can’t live without static typing and all the safety it brings. And
auto-completion! The largest, and almost only, obstacle was related to the ease
of writing smaller programs and scripts. With Python I could just create a new
text file, write a few lines, and execute it. Whereas with C# this meant
starting up Visual Studio, creating a whole new project, writing the code,
compiling it, finding the executable, and then finally executing it. The effort
just wasn’t justified for snippets and small programs. But for something
larger, like reinventing Tracker once again, or for extract-transform-load
tools, it began to make sense. (And games of course, but at this point they
didn’t really happen anymore, as I was too focused on the server side.)</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Then this all changed almost overnight when
I found LinqPad. It provided a platform to write and execute fragments of C#
code, without the need to even write them to disk (a bit like with Python’s
REPL). This obviously obliterated the obstacle (khihi) I was having, and opened
a way to start writing even those tiniest of snippets in C#, while also
allowing me to use the “My Extensions” feature of LinqPad together with the default-imported packages </span>to share some code between all these programs. And of course I could just individually reference a DLL in each script.</div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US">* *
*<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Parallel to this is also maybe the fact
that I was now starting to get so experienced, that I had already “tried
everything” and what was left was to enjoy all this knowledge, and apply it to
solving problems :o<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">With Python it never really got to a point
where I “had” to have larger amounts of shared code or needed to package it
somehow. There were a few lesser instances of it, but I was able to solve them in
other ways. But now with C# things have continued to evolve, and the need for
sharing code between projects, programs and snippets is becoming a reality.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">For why this is only happening now, there
were multiple aspects at play. As I said, almost everything was always
bleeding-edge, and there was no need to version things. In part this was because
there really wasn’t any actively maintained code
running. Everything was constant rediscovery, and as such there really wasn’t
any code to share. Rather, everything was constantly forked and improved upon
without thinking about reuse too much. Good for velocity. Also, as programming
really happened on only one computer, code could be shared by just linking
other files. Code running elsewhere was either relatively rare, or in
maintenance-only mode.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">With age and experience this is now
changing. Or, well… That, but also the fact that I’ve gotten more interested in
smaller and more approachable online services - services which have the ability
to create immediate value. A bit like in the old times! In order to make those services dependable, a more disciplined approach is needed. While Tracker will
always have a special place in my heart, there seem to be better things to spend
my time on. At the same time, another strong reason is that I find myself creating and executing code on
multiple computers and platforms, making filesystem-based approaches
unsustainable.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US">* *
*<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">All this has <i>finally</i> pushed things
to a point where I should really be looking into building some kind of
infrastructure to share code between all these instances. For a time I
leveraged the ability of LinqPad to reference custom DLLs, and was quite
happy building per-project utilities with it. But now my development efforts
seem to be focused all over one smaller slice of the spectrum, with multiple
opportunities to unify code. This is especially important considering
that I’m also looking to make it more viable to both create and execute code in
multiple environments, be it my laptop, my work computer or a server. In my
master’s thesis I researched using Docker Compose to simplify running code on
servers. As indicated in the earlier blog post, I’m now trying to continue on
that path and make that task even simpler by utilizing serverless computing.
With both Docker and serverless it becomes somewhat impossible to easily and
reliably consume private dependencies from just the filesystem.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Instead, these dependencies should live in
some kind of centralized semi-public location. And a private NuGet repository
is just the thing! As outlined in these new blog posts, my idea is to build
some kind of small, easily consumable private ecosystem of core features and
libraries, so that I can easily reference these shared features and not
copy-paste code and files all over the place. I started the work of building
this library a tiny tiny while ago, but quickly decided that I should go all
in. <b>I’m not saying it will be easy, but it should certainly be
enlightening.</b></span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">The first set of projects to make it into the
repository are those related to building web APIs. While I’ve on many occasions
planned on using tools like gRPC, sometimes something simpler just is the way
to go. I’m not sure how thoroughly I’ve shared my hatred for REST, but I’d like
something even simpler if going in that direction. This set of projects is just
that: a common way to call HTTP APIs the way I want. Ultimately this comes down
to one single thing: wrapping application exceptions behind a single HTTP
status code and handling them from there. How else am I supposed to know whether I’m
calling the wrong fcking endpoint, or whether the specific entity just doesn’t happen to exist??! Although, as we all know, writing HTTP libraries is a path to suffering. But maybe one with a narrow focus can succeed?</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">Anyway, maybe my impostor syndrome gets something
out of this. Or not. It seems that <i>everyone else</i> is releasing public packages
and open source code. But I am not. But maybe, just maybe, this will eventually
change.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US">* *
*<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">The task of actually setting up a NuGet
repository ended up being easier than I’d like to think. As a first reaction
I thought about using some kind of self-hosted server for it, as I had had much
success with setting up servers for other pieces of infrastructure lately.
Unfortunately, the official Nuget.Server package was a bit strange, and I
wasn’t sure if it even ran without IIS. I also didn’t want to spend too much
time searching for alternative open-source implementations. So, I decided to
try the cloud this once! Microsoft has a free 2 GB tier of Azure Artifacts
for hosting not only NuGet packages, but also packages for Node.js and Python.
I decided to test it, and then migrate to a self-hosted one should I need to
save on hosting costs in the future.</span></div>
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">As I wanted a private NuGet feed, I first
created a private project (or team?), and then set up <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=preview-page">Personal
Access Tokens</a> so that I could automatically fetch and push packages with my
credentials. To actually use the feed with the NuGet command-line tool, I ended
up <a href="https://docs.microsoft.com/en-us/nuget/consume-packages/configuring-nuget-behavior">adjusting</a>
my local per-user NuGet.Config file. I think I also had to first install the Azure Artifacts credential provider using a PowerShell script. But anyway. This way I should now be able to use the
packages in any new project, without having to explicitly reference the
repository and access tokens: <u>nuget sources add -Name DeaAzure -Source
"https://pkgs.dev.azure.com/_snip_/index.json" -Username anything -Password
PAT_HERE</u>. I had to manually add the feed to LinqPad, but that was just as easy:
<u>F4 -> Add NuGet… -> Settings</u>.</span></div>
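<div class="MsoNormal"><span lang="EN-US">The per-user NuGet.Config then ends up containing something like this (a sketch; the feed URL is redacted just like in the command above, and in practice the credential provider is supposed to handle auth rather than a cleartext PAT):</span></div>

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="DeaAzure" value="https://pkgs.dev.azure.com/_snip_/index.json" />
  </packageSources>
  <!-- Credentials for the source above; the element name matches the source key -->
  <packageSourceCredentials>
    <DeaAzure>
      <add key="Username" value="anything" />
      <add key="ClearTextPassword" value="PAT_HERE" />
    </DeaAzure>
  </packageSourceCredentials>
</configuration>
```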
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">For actually creating the packages I just
used Google and followed the first (or so) result. This means I wrote nuspec-files
for the target projects, built them manually and then used <u>nuget pack</u> and <u>nuget
push</u>. I also wrote a script automating this. But after I later tried doing the
same with a newer version of the nuget CLI, I got some warnings. It seems that it
is now possible to specify the package details directly in csproj-files. Apparently
this should also simplify things a bit: now I manually add the generated DLL to
the package, whereas the newer way might do it automatically. It did feel a bit strange to have to manually figure out how to include the DLLs in the correct directory inside the nupkg. I’ll try to blog about
the updated version of this process in more detail later (edit: <a href="http://blog.dea.fi/2019/11/how-to-create-nuget-package.html">here</a>). That is, after I get to implementing it.</span></div>
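<div class="MsoNormal"><span lang="EN-US">If I’ve understood those warnings correctly, the csproj-based alternative would look roughly like this (a sketch; the id and version are just picked to match the commands earlier in this post):</span></div>

```xml
<!-- .csproj: package metadata inline, replacing the separate nuspec.
     "dotnet pack" (or msbuild -t:pack) then produces the .nupkg,
     placing the built DLL in the right directory automatically. -->
<PropertyGroup>
  <PackageId>Dea.WebApi.AspNetCore</PackageId>
  <Version>0.2.1</Version>
  <Authors>Vazde</Authors>
  <IncludeSymbols>true</IncludeSymbols>
</PropertyGroup>
```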
<div class="MsoNormal">
<span lang="EN-US"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US">While I was relatively hopeful about this whole
thing above (though the last paragraph kinda changed that trend), I’m actually not
sure what kind of things I’ll have to do when I want to use the feed with
dockerized builds. I might have to bundle the config into each repository.
Remains to be seen. It will also be interesting to see how my workflow adapts
to both using and releasing these new NuGet packages. How about
debugging and debug symbols? Will there be build automation and tests? How are
versioning and version compatibility handled? Automatic updates and update
notifications? Telemetry? For a moment everything seemed like it was too easy;
I’m glad it didn’t actually end that way! /s</span><br />
<span lang="EN-US"><br /></span>
<span lang="EN-US"><b>Edit:</b> it seems that at least the debug symbols of my own packages work directly with LINQPad when using Azure Artifacts. For some reason I did have to <i>manually</i> remove the older versions of some of my packages from LINQPad's on-disk cache in order to use the symbols I included in the newer packages (the older ones didn't include symbols, even though the newer ones had different version numbers). But after that they worked :)</span></div>
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-91400851243728989922019-11-22T14:26:00.000+02:002019-12-17T17:38:46.800+02:00Weekend project: Destiny 2 account tracker (feat. improved metrics infrastructure)<br />
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Following
the theme set by the previous post, I have continued my pursuit for improved
infrastructure. Not that it really is anything special yet. Just more services
with almost default config. But the idea here is that these services will form
some kind of stable core for many other services to follow, and hopefully
evolve over time to become even more dependable. At this point they are to
“just be out there” and usable in order to test new ideas.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">One of
these ideas is about finally upgrading how I collect timeseries data.
Over the years I’ve had several tiny data collection projects, each
implementing the storing of the data in a different way. I’ve already <i>reinvented
the wheel</i> so many times, and it is about time to stop. Or at least try to
do it a bit less :p Also, in the previous stage I installed MongoDB, so for
this stage I thought that it is about time to also install a relational
database, and PostgreSQL has been my absolute favourite on this front for a
while now.</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Meanwhile,
after doing a tiny bit of research on storing timeseries data, I found
TimescaleDB. And what a coincidence, it is a PostgreSQL extension! I think that
we’ll be BFFs! That is, after it supports PostgreSQL 12... I wanted to install
the latest version of Postgres so that I get to enjoy whatever new features it
has. But mostly because I’d then be avoiding a version upgrade from 11 to 12,
had I chosen to install the older version. Not that it would probably have been
a big problem. Anyway, the data can easily be stored in a format TimescaleDB
would expect, and it shouldn’t balloon to sizes that absolutely require the
acceleration structures provided by TimescaleDB before the extension is updated
for 12. Rather, this smaller dataset <i>should</i> be perfectly usable with just a
plain PostgreSQL server. Upgrading should then be just a matter of installing the
extension and running a few commands. Avoiding one upgrade to perform another,
oh well…</span></div>
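<div class="MsoNormal"><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Based on my reading of the TimescaleDB docs, upgrading a plain table later should be roughly these few commands (a sketch; the table and column names are made up):</span></div>

```sql
-- Assuming an existing plain table such as:
--   CREATE TABLE measurements (time timestamptz NOT NULL, sensor text, value double precision);
CREATE EXTENSION IF NOT EXISTS timescaledb;
-- Convert it into a hypertable partitioned by the time column, keeping existing rows
SELECT create_hypertable('measurements', 'time', migrate_data => true);
```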
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">In the far past
I’ve used both custom binary files and text files containing lines of JSON to
store time series data like hardware or room temperatures. More recently I’ve
used SQLite databases to keep track of stored energy and items on a modded
Minecraft server (Draconic Evolution Energy Core + AE2, an OpenComputers Lua
script, and a Dockerized TCP host (there was not enough RAM in the OC computer to serialize a full JSON
in-memory)). I should try to add some pictures if I happen to find them…</span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">In the past, I visualized
the data with either generated Excel sheets or generated JavaScript files using
whatever visualization library I happened to find. Not very nice.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">* * *<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">But let’s
get to the point, as there was a reason I wanted to improve data collection this
time. I finally got around to checking out the API of Destiny 2 in more depth,
and built a proof-of-concept of an account tracker.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">For those
poor souls who don’t know what Destiny 2 is: it is a relatively multifaceted
MMOFPS, and I’ve been playing it since I got it from a Humble Monthly (quite a
lot :3). As with any MMO, there is a lot to do, with an almost endless number of
both short- and long-term goals. It made sense to build a tracker of some sort
so that I could feel even more pride and accomplishment for completing them.
And for some specific goals, maybe even to see which strategy works best, and
which doesn’t. The API provides near-realtime statistics on a great many
things, and it would be nice to be able to visualize everything in real time,
too.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">To
accomplish this I needed several things: authentication, getting the data,
storing the data and lastly visualizing it.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<b><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Authentication</span></b><span lang="EN-GB" style="mso-ansi-language: EN-GB;"> in the API is via OAuth, so I
needed to register my application on Bungie’s API console and set up a
redirection URL for my app. After this I could generate a login link pointing
to the authorize endpoint of Bungie’s API. This endpoint redirects back to my
application with a code in the query string. The code can then be posted,
form-URL-encoded, to Bungie’s token endpoint. That endpoint requires basic
authentication with the app’s client id and secret. After all this, the reply
contains an access token (valid for one hour) and a refresh token (valid for a
few months, but reset on larger patches). The access token can then be used to
call the API for that specific account. This would probably be a great
opportunity to open-source some of the code… </span></div>
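To sketch the flow above in code, here is a minimal, hedged example of the code-for-token exchange. The token URL and the response fields are my reading of Bungie's documentation and should be verified against it; `BasicAuthHeader` is a helper name of my own.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static class BungieOAuth
{
    // Token endpoint as I understand it from Bungie's docs - verify before use.
    const string TokenUrl = "https://www.bungie.net/platform/app/oauth/token/";

    // Basic authentication value built from the app's client id and secret.
    public static string BasicAuthHeader(string clientId, string clientSecret) =>
        Convert.ToBase64String(Encoding.UTF8.GetBytes($"{clientId}:{clientSecret}"));

    // Posts the code from the redirect's query string, form-URL-encoded.
    public static async Task<string> ExchangeCodeAsync(
        HttpClient http, string clientId, string clientSecret, string code)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, TokenUrl)
        {
            Content = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "authorization_code",
                ["code"] = code,
            })
        };
        request.Headers.Authorization = new AuthenticationHeaderValue(
            "Basic", BasicAuthHeader(clientId, clientSecret));

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        // The JSON reply carries the access token, its expiry, and refresh details.
        return await response.Content.ReadAsStringAsync();
    }
}
```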
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Speaking
of which, there already exist some open-source libraries for using the API! I
didn’t look into them yet, as I was most unsure about how the authentication
would work. I guess I should now take a look.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">The process
of figuring out how the authentication works involved quite a bit of stumbling
in the dark. The documentation wasn’t equally clear at every step, although at
least it existed. On the other hand, I’d never really used OAuth before, so
there was quite a bit of learning to do.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">This also presented
a nice opportunity to put all this infrastructure I’m building to good use! The
OAuth flow includes the concept of the application’s redirection URL, but for a
script there really isn’t any kind of permanent address. So what to do? I
haven’t implemented it yet, but I think a nice solution would be to create a
single serverless endpoint for passing the code forward. While I haven’t talked
about it yet, I’m planning on using NATS (a pub-sub broker with optional
durability) for routing and balancing many kinds of internal traffic. In this
case an app could listen to a topic like /reply/well-known/oauth-randomstatehere.
When the remote OAuth implementation redirects back to the serverless endpoint,
the endpoint publishes the code to that topic, and the app receives it. All
this without the app needing a dedicated endpoint of its own! It seems that
someone really thought things through when designing OAuth. And as a bonus,
that code is short-lived and must only be used once, so it can be safely logged
as part of traffic analysis.<o:p></o:p></span></div>
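The listen-on-a-reply-topic idea could look roughly like this, using the NATS.Client NuGet package. The topic naming scheme and the localhost broker address are assumptions for illustration, not an existing setup:

```csharp
using System;
using System.Text;
using NATS.Client; // NuGet package "NATS.Client"

static class OAuthCallbackListener
{
    // Topic derived from the per-login OAuth "state" value.
    public static string TopicFor(string state) => $"reply.well-known.oauth.{state}";

    static void Main()
    {
        // Random state for this login attempt; also embedded in the login link.
        var state = Guid.NewGuid().ToString("N");

        using var conn = new ConnectionFactory()
            .CreateConnection("nats://localhost:4222");

        // Subscribe before opening the login link so the relayed code
        // can't be missed.
        using var sub = conn.SubscribeSync(TopicFor(state));

        Console.WriteLine("Waiting for the serverless endpoint to relay the code...");
        var msg = sub.NextMessage(120_000); // give up after two minutes
        var code = Encoding.UTF8.GetString(msg.Data);
        Console.WriteLine($"Got an authorization code ({code.Length} characters).");
    }
}
```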
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<b><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Reading
game data</span></b><span lang="EN-GB" style="mso-ansi-language: EN-GB;"> is just a
matter of sending some API requests with the access token from earlier, and
then parsing the results. At the moment I am only utilizing a fraction of what
the API has to offer, so I can’t really tell much. Right now this means the
profile components API with components 104, 202 and 900. This returns the status
of account-wide quests and “combat record” counters, which can be used to track
weapon catalyst progression. I’m reducing this data to key-value pairs. Each
objective has an int64 key called “objectiveHash”, and another int64 as the
value. The same goes for the combat record data. At the moment I'm using a LinqPad script that I start when I start playing, but in the future I'd like to move this to be a microservice. This service could ideally poll some API endpoint to see if I'm online in the game, and only then call the more expensive API methods. Not that it would probably be a problem, but I'd like to be nice.<o:p></o:p></span></div>
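The reduction to key-value pairs can be sketched like this. The JSON shape here is deliberately simplified: the real profile components reply nests objectives per character and per component, so treat the property layout as an assumption.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

static class ObjectiveReducer
{
    // Flatten a list of objectives into objectiveHash -> progress pairs.
    public static Dictionary<long, long> Reduce(string objectivesJson)
    {
        var pairs = new Dictionary<long, long>();
        using var doc = JsonDocument.Parse(objectivesJson);
        foreach (var objective in doc.RootElement.EnumerateArray())
        {
            var hash = objective.GetProperty("objectiveHash").GetInt64();
            // Missing progress is treated as zero.
            var progress = objective.TryGetProperty("progress", out var p)
                ? p.GetInt64()
                : 0L;
            pairs[hash] = progress;
        }
        return pairs;
    }
}
```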
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<b><span lang="EN-GB" style="mso-ansi-language: EN-GB;">Data is
saved</span></b><span lang="EN-GB" style="mso-ansi-language: EN-GB;"> to the
PostgreSQL database. I wrote a small shared library abstracting the metrics
database queries (and another for general database stuff), so writing the
values is now very simple. This shared library could be used for writing other
data, too - like the temperatures and energy amounts I mentioned above. I should
probably add better error handling, so that a lost connection can be retried
automatically without interaction from the code using the library. But anyway,
here is how it is used:<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">var worker = new PsqlWorker(dbConfig); // Lib1<br />
var client = new MetricsGenericClient(worker); // Lib2<br />
var last_progress = /*client.get*/;<br />
// ...<br />
var id = await client.GetOrCreateMetricCachedAsync("destiny2.test." + objectiveHash);<br />
// ^ The result is cached in-memory after the first call.<br />
if (progress != last_progress) { // Compress data by dropping unchanged values<br />
&nbsp;&nbsp;&nbsp;&nbsp;await client.SaveMetricAsync(id, progress);<br />
&nbsp;&nbsp;&nbsp;&nbsp;last_progress = progress;<br />
}</span></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<b><span lang="EN-US" style="mso-ansi-language: EN-US;">Visualizing
the data</span></b><span lang="EN-US" style="mso-ansi-language: EN-US;"> was next. I have been jealously eyeing Grafana dashboards for a long time,
but never had the time to set something up. There was one instance a few years
ago with Tracker3 where I stumbled around a bit with Netdata and Prometheus, but
that didn’t really stick. Now I did some quick research on Grafana, and
everything became clear.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;">Grafana is
just a tool to visualize data stored elsewhere. It supports multiple backends
for that, and they each have slightly different use cases. I’m still not
exactly sure what kind of aggregation optimizations are possible when viewing
larger datasets at once, but I just accepted that it doesn’t matter,
especially when most of the time I’d be viewing the most recent data. What I
also had to accept was that Grafana doesn’t automagically create the pretty
dashboards for me, and that I’d have to put in some effort there. But not too much.
Adding a graph is just a matter of writing a relatively simple SQL query and
slapping the time macro into the SELECT clause. And then the graph just
appears. For visualizing the number of total kills with a weapon, this was as
complicated as it would get. For counters displaying the current value it likewise
was just a matter of writing the SQL query with ORDER BY time DESC LIMIT 1.<o:p></o:p></span></div>
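For illustration, the two query shapes described above might look like this against a hypothetical metric_values table ($__timeFilter is a macro of Grafana's PostgreSQL data source):

```sql
-- Time series panel: one weapon's kill counter over the dashboard's
-- time range ($__timeFilter expands to a WHERE condition on the range).
SELECT "time", value AS kills
FROM metric_values
WHERE metric_id = 123        -- hypothetical id of the metric to plot
  AND $__timeFilter("time")
ORDER BY "time";

-- Single-stat panel: just the current (latest) value.
SELECT value
FROM metric_values
WHERE metric_id = 123
ORDER BY "time" DESC
LIMIT 1;
```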
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;">And while I
was at it, I also added a metric for the duration of the API calls. I also remembered that Grafana supports annotations, which can likewise be saved to Postgres. And the
dashboard started to really look like something! Here there's one graph for "favourite" things, and one which just visualizes everything that is changing.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoVWBrFM7MGynrPnovueoaaQNPUsy0uzxCG_8Pe8pk7xW3ZtUDvApR8C3LOjejukv23EqkSott_gL4goQrNNVB50QH7d83LOcIWhUTuUGaxNb4Zj06ng6P1VUGFI4gc0VWfdm5T-jk1V4/s1600/d2_grafana.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoVWBrFM7MGynrPnovueoaaQNPUsy0uzxCG_8Pe8pk7xW3ZtUDvApR8C3LOjejukv23EqkSott_gL4goQrNNVB50QH7d83LOcIWhUTuUGaxNb4Zj06ng6P1VUGFI4gc0VWfdm5T-jk1V4/s1600/d2_grafana.png" /></a></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;">And why
stop there? I also installed Telegraf for collecting system metrics such as CPU
and RAM utilization and ping times. I went with the simplest approach of
installing InfluxDB for this data, as there were some ready-made dashboards for
that combination. More services, more numbers, more believable stack :S<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US" style="mso-ansi-language: EN-US;">* * *<o:p></o:p></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-US" style="mso-ansi-language: EN-US;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-US" style="mso-ansi-language: EN-US;">That’s it.
No fancy conclusions. See you next time. I’ve been using this system for only a
week or two now; maybe in the future I’ll have some kind of deeper analysis to
give. Maybe. And maybe I’ll get to refine the account tracker a bit more, so that I
could consider maybe (again, maybe) open-sourcing it.<o:p></o:p></span></div>
<br />
PS. These posts are probably not very helpful if you are trying to set up something like this yourself. Well, there's a reason. These are blog posts, not tutorials. I don't want to claim to know so much that I'd dare to create a tutorial. Although... some tutorials are <i>very bad</i>, I'm sure I could do better than those.Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-76140297725902228422019-11-10T22:50:00.001+02:002019-12-18T11:39:12.934+02:00An update: a brave new world is coming?<br />
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><b>Hmm.</b>
Starting to write this post is surprisingly hard. Despite the fact that I’ve
already written <i>one</i> blog post earlier this year. Well, anyway… Now that
I’m finally graduating (more about that in a later post!11), I’ve had some
extra time to spend on productive things and to re-evaluate my focus. And what
an exciting turn of events it has been!<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">I’ve
already spent one weekend briefly investigating the state of serverless computing.
This netted me an instance of Node-Red, but also MQTT, NATS, MongoDB and Redis
(even though I already had one) to toy around with. All of these are running on
top of Docker Compose on a new, previously under-utilized incarnation of Usva.
Docker Compose is actually a somewhat new tool for me. I’ve meant to look into
it (or Kubernetes) for quite a while, but never got around to it. That is,
until I made it part of my thesis. That research has already paid off –
setting up all those services with Compose was a breeze!<o:p></o:p></span></div>
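For reference, a minimal Compose file for a stack like this might look as follows. The image tags, ports and volumes here are illustrative guesses, not my actual configuration:

```yaml
# docker-compose.yml - hypothetical subset of the services mentioned above
version: "3.7"
services:
  nodered:
    image: nodered/node-red
    ports:
      - "1880:1880"       # Node-Red's web UI for coding and testing
    volumes:
      - nodered-data:/data
  nats:
    image: nats
    ports:
      - "4222:4222"       # client connections
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db
  redis:
    image: redis
volumes:
  nodered-data:
  mongo-data:
```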
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Testing out
Node-Red actually kicked off a larger process. It demonstrated that utilizing a
range of ready-made tools isn’t all that bad, and that there now seem to be a
lot of tools that suit my way of doing things. And well, now that I’ve somewhat
reinvented all of the software involved at least once, I can finally get to
properly using existing ones :p I'll have to postpone Tracker once again while I discover this brave new world.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">Above all, Node-Red
offers a nice platform to host short pieces of code. There is a web
UI for coding and testing, and after that the code remains on the instance and
keeps on executing. No need to worry about setting up routing or cronjobs or
any of that! There are also plugins for using things like databases and message
queues with minimal work. While Node-Red doesn’t seem very scalable for anything remotely
complicated (see the price tracker below), it did validate the concept of
serverless for me. And tinkering with it was actually quite fun and easy!<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEjp8mtsfgL1VdotUhskWbkKT_s-xu-XLb9YgNfvQGyAp1gl9peEIj056CqcJ7EdYKUTGnj1wFi7Rrk9Yl4cmhfZ_ysz9qcrBAtgP0hq5XXDsEU8RTPKcX4lzTCkQ-HBmdwFMi11PBelU/s1600/ifR.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="581" data-original-width="1379" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEjp8mtsfgL1VdotUhskWbkKT_s-xu-XLb9YgNfvQGyAp1gl9peEIj056CqcJ7EdYKUTGnj1wFi7Rrk9Yl4cmhfZ_ysz9qcrBAtgP0hq5XXDsEU8RTPKcX4lzTCkQ-HBmdwFMi11PBelU/s1600/ifR.png" /></a></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">* * *</span></div>
<div align="center" class="MsoNormal" style="text-align: center;">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">While a bit
unrelated, as part of this I also began experimenting with using events and message
queues for communication on a larger scale than before. This shouldn’t come as
a surprise, though; in one iteration of Tracker I had already begun heading this
way. This kind of indirect messaging also plays a role with IoT
devices – directly addressing them might not be possible due to how they are
networked. But why is this important? For two roughly equally important
reasons. The first is that for the longest time I’ve been in great pain over
how hard service discovery can be. But if services subscribe to queues instead
of needing direct addressing, service discovery is almost completely eliminated.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">And second,
it is because IoT is relevant now! I backed <a href="https://www.kickstarter.com/projects/meadow/meadow-full-stack-net-standard-iot-platform">Meadow F7 on Kickstarter</a> during the
spring; it is a fully .NET Standard 2.0 compatible embedded platform in the Adafruit
Feather form factor, with a possible battery life of up to about two years. C#
and low power in one small package!!! It even has integrated Wi-Fi and
Bluetooth. I should be getting it in just a few weeks. Finally! But I guess
more on that in a later post.<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">So, what
does this all mean? With service discovery taken care of by a message
broker, and hosting solved by serverless and docker-compose (at least in theory),
there comes an unexpected ability to <b>realistically develop microservices</b>
and other lighter pieces of code. And possessing some easily programmable embedded
devices opens up a whole new level of interactions. Though first I have to
replace Node-Red with a proper serverless solution… Well, not <i>have</i> to, but...<o:p></o:p></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;"><br /></span></div>
<div class="MsoNormal">
<span lang="EN-GB" style="mso-ansi-language: EN-GB;">I already
have plenty of development ideas, but I guess they too will have to wait for another
post!<o:p></o:p></span></div>
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-83303008951960894922019-04-12T14:42:00.002+03:002019-04-12T14:42:33.998+03:00Trying something newLet's try something new. Or actually, something old.<br />
<br />
What I mean by this is that it is once again time to do some self-reflection. But this time I'd like to do it in Finnish, as that is my native language and the language of most of my thought process. It should thereby be a lot easier to unload my thoughts with precision, as there is no translation overhead nor thinking in two languages simultaneously. Although, as I might have mentioned before, another language helps to look at things more objectively. Anyway, it is low(er) effort, just what I need now. And there is the potential for feeling good about the end result.<br />
<br />
This type of deeper reflection poses a great question, though. Is this something I dare to publish? Things are eternal on the internet; no backsies. And yet: once published, don't I just hope it gets read?<br />
<br />
This was written at 2 am. Try to enjoy.<br />
<br />
<div style="text-align: center;">
* * *</div>
<br />
I am so damn tired of the fact that these days I can't really bring myself to do anything. At night sleep won't come, and during the day I'm exhausted. My mind is full of all kinds of cool and wonderful things I'd like to do, yet none of it ever gets done. Either the interest evaporates the moment I try to act on it, or I simply don't have the physical energy.<br />
<br />
Whether it's watching TV, playing story-driven games, exercising or doing something creative, the end result is the same. Nothing. There are better days, of course, but they seem to be getting rarer and rarer. I've suffered from this problem for probably ten years or so now, but only in recent years has it gotten bad enough that I truly woke up to it. And sought help. Sadly this is one of those things for which there is no quick fix - at least not in the longer term.<br />
<br />
Or then I'm just damn lazy.<br />
<br />
I'm writing about this because these things, too, should be something one can talk about. To someone, at least. On the other hand, I'm once again trying to change my approach a bit and get at least something done. You see, for a long time now I've been meaning to start - on top of everything else - my own rather free-form vlog, a video diary. But I haven't managed that either, and it bothers me. There are many reasons behind the desire to start a vlog. An excellent topic for, say, the first episode, then. But I can't muster the energy. Quite the chicken-and-egg problem. This text, for its part, tries to be a kind of detour around the issue.<br />
<br />
<div style="text-align: center;">
* * *</div>
<br />
A vlog, why? I can't even put the reasons in order of importance, but here's a half-digested reflection:<br />
<br />
A couple of years ago I got excited about "travel film" style Youtube content. Relatively short, picture-postcard stories about travelling. The central theme is above all the fun experiences of the trip and the feelings the trip stirred in the video's maker in that moment. The carpe diem attitude somewhat characteristic of these videos was in some way arresting for me. As I opened up earlier in the context of Life is Strange, enjoying the moment has always been rather challenging for me. Maybe I could somehow manage to emulate that feeling if I made similar videos myself? And in the end, maybe even learn to enjoy?<br />
<br />
Then again, making videos would be a truly excellent opportunity to practice performing, or even more importantly, expressing myself in words and gestures. Those who know me have probably noticed that this isn't very easy for me either. (Do comment, even anonymously. My website has a contact form, which at least probably works. Or if there's anything else.) When you try to explain your feelings to a camera, you're forced to think about them and form an opinion of everything that happened, in the best case while it is still happening. After that you can observe the event in a whole different way and genuinely try to immerse yourself in it in the moment. Experiencing things more fully enriches life and brings joy in a whole different way. Or at least that's the hypothesis.<br />
<br />
Making videos is of course also a whole world of its own. It's something different from the world of programming and video games where I usually wander. Yet somehow familiar. Making is, of course, a broad term. It includes at least editing, filming, and even planning. All of them things in which one could also improve and learn a lot. Editing is creative work, albeit tediously laborious. Filming, on the other hand - at least as I see it - is a particularly fine thing. Because like narrating, filming too forces you to examine the situation at hand from different angles (in every sense of the word :p) and perhaps appreciate what is happening. Even relatively simple things can bring great joy when you focus on them. Last I mentioned planning. There I'd have room to improve, though in a slightly different way than one would first guess. I'm not spontaneous in any way, and I'm in my comfort zone when I get to think about what's coming in peace. However, I think about everything quite pragmatically, not with feeling. How to bring feeling into the planning? To think about what kinds of emotions I might experience from things. When you take the setup even further, it turns around, and we're facing surprisingly deep questions. What to do in order to feel the feelings I want? What do I really like? What makes me happy?<br />
<br />
So vlogging is perhaps not only a broad new hobby opportunity, but also a tool with great potential. But it is also something else. I dare say that one of a human's basic needs is to be noticed. This blog - and in the future perhaps a vlog, too - is my way of shouting into the void that I exist. I also have a strong belief that I could genuinely make quality content that is useful to people. Either as entertainment, or as substance. But am I ready to do the work to get there? And what is enough? Maybe it's better not to think about it.<br />
<br />
<div style="text-align: center;">
* * *</div>
<br />
In the wake of this great outpouring, let me also highlight that earlier short text about the pursuit of perfection. It is damn hard, and yet for some reason one still strives for it. And since it simply cannot be reached, it is so very easy to lose motivation. Whether it's programming or making videos, or something else. Why attempt anything when you know it could always be done a little better? Why publish anything you've started when there's still an endless pile of improvements in sight? How to settle for good? Or even mediocre? How to convince yourself that even mediocre can be good enough? How to accept that there will always be critics?<br />
<br />
How to let go and give your imperfect, yet realistic and genuine creations a chance to succeed as they are? How to feel pride in them and in making them? What, in the end, is valuable?Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-86492049465072658202018-12-09T09:09:00.001+02:002019-12-23T14:57:27.507+02:00About Tracker, and being perfect<b>How hard</b> can it be to just make a simple service to track a phone? Not that hard. Unless you want to be a perfectionist and overengineer the whole thing, several times over. And that is why I never seem to get anywhere close to finishing Tracker.<br />
<br />
In the last five or so years I've learned a lot about web services and would of course like to apply ALL that knowledge somewhere. A hobby project seems like the perfect place!<br />
<br />
<ul>
<li>Code generation and custom tooling: avoiding repetitive tasks by writing ten times as much code to maintain. And then there is this one thing that was not in the original specification, and you'd basically have to rewrite the whole thing.</li>
<li>High availability and fault tolerance: considerable additional design and implementation complexity. Good luck getting anything ready when working on the project just some occasional weekends.</li>
<li>Metrics: a whole universe in itself to set up, integrate and maintain.</li>
<li>Devops, service orchestration and distributed configuration management: yet another whole universe, with yet more design and implementation complexity. As if you even get to this point. Sure you can devops your stub services, but that if anything is wasted time.</li>
<li>Distributed tracing: better reinvent the wheel again! That way it works just the way I want.</li>
<li>Extremely high performance: each small part is tuned for the best achievable performance before integrating it into the larger system, and nothing ever gets finished.</li>
</ul>
And of course everything needs to be <b>P E R F E C T</b>!<br />
<br />
What good is a <thing> if it doesn't function as well as theoretically possible? This intra-process message queue can only pass 300 000 messages per second?? It should be able to do at least 500 000!! Better spend a week trying to optimize it before moving on to the next thing! A decryption routine that can only decrypt 20 000 packets per second? It needs to be able to do at least twenty times as much! Better to just give up, or spend ages looking for a faster implementation.</thing><br />
<br />
How about this config file? Why is it on the file system? How do I make it seamlessly work with multiple environments with different configs? What if those configs change dynamically? Better get to work already, there is much to do!<br />
<br />
Hey, see that single point of failure? You'd better engineer it away! What failure point? I need to first map the whole system and get realtime metrics on every single thing it does, or doesn't.<br />
<br />
And hey, see that middleware solution for solving that one problem you have? Better not use it and write my own. That way I learn so much more about the problem domain and can leave out unnecessary parts for blazing-fast performance.<br />
<br />
I'd like to have a public test server for this? I wonder how I can set that up automatically. And better make sure that it does database migrations and all that automatically, too.<br />
<br />
And did you know that you could run your code inside a custom operating system to save on context switching time with networking?<br />
<br />
Oh, and there is this web application, too. What if I used modern JavaScript this time? And then the mobile app needs to be rewritten, too.<br />
<br />
And what about...!<br />
<br />
/dies<br />
<br />Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0tag:blogger.com,1999:blog-7073976836848498167.post-27029166689861508982018-12-09T08:17:00.001+02:002018-12-09T08:19:40.385+02:00A new phone :O<b>Times really are changing.</b> After about two years of actively looking for a replacement for my trusty - yet slowly rotting - Nexus 5, I finally think I found a suitable model! The Samsung Galaxy S9. 4k60 OIS video, USB-PD charging with USB 3.0 and dual SIM support. And a notification LED. Eat that, Nokia! A whole lineup of attractive phones, yet no LED. Why do you do this?<br />
<br />
Unfortunately the Galaxy S9 isn't everything I had hoped for. In just a few hours I found a large number of things I dislike, sorted roughly by category below. I sincerely hope that I can cross out these pain points as time goes on and I adjust to them, or find <i>good </i>workarounds. If not: fuck.<br />
<br />
Physical:
<ul>
<li>It is a lot harder to pick the phone up from a flat surface.</li>
<li>The surface of the phone is extremely slippery. I wouldn't dare to try to hold it with woolen gloves, for example.</li>
<li>Extra care and thought must be put into holding the phone; the edges of the screen pick up unintended touches rather easily.</li>
<li>The phone has a very small chin, and this makes it hard to press the back-button when using it with just one hand.</li>
<li>On an otherwise smooth phone there are disturbingly rough edges on two of the sides: the one with the charging port, and the one opposite it.</li>
</ul>
<br />
<br />
Hardware:
<ul>
<li>The camera needs a really bright environment to work well at 60 fps. Indoor lighting is just not enough.</li>
<li>Charging is limited to 10 W even though the phone supports USB-PD, which allows up to 100 W.</li>
</ul>
<br />
<br />
Software:
<ul>
<li>Bixby can't be disabled without a Samsung account.</li>
<li>No "Development shortcut" option in developer options for easily killing and uninstalling apps. No shortcut to kill apps at all!</li>
<li>App info doesn't show the package name.</li>
<li>Screen turns on when charging with no way to disable this.</li>
<ul>
<li>A workaround is to install SnooZy Charger. The screen still turns on, but then immediately off. However, it also turns the screen off even when it was already on before charging started or stopped :(</li>
</ul>
<li>Bloatware that can't be uninstalled, some of it not even disabled.</li>
<li>No option to disable charging LED notification.</li>
<li>Annoying popup to buy Samsung's Secure Wi-Fi VPN service when connecting to new Wi-Fi networks.</li>
<li>I haven't (yet?) rooted the phone, so it is impossible to copy the settings and other saved data from some applications to the new phone. But this point applies to all other phones, too.</li>
<li>Rooting will disable Knox, a type of TPM. I see no reason for this. With rooting, the TPM would be an even more important component, allowing cryptographic secrets to be stored while preventing them from being extracted from the device.</li>
<li>With fingerprint unlocking enabled, the phone will randomly ask for the password. No setting to make fingerprint work 100% of the time.</li>
</ul>
<br />
<br />
I guess that is everything for now. Why did I even get the new phone if I hate it so much? That is a valid question. The biggest reason is of course the fact that the old phone was becoming unusable. The same thing happened when I had to upgrade my otherwise perfectly fine 4790K-based desktop PC to an 8700K. That was actually even worse, as the performance gains were only marginal.<br />
<br />
With this new phone I have high hopes that the greatly increased computing power will shorten iteration times in mobile application development when I eventually get back to it. And now I can once again use the phone for fun things, instead of just swearing at it for being so slow. But I wonder what those fun things are. Are there other things to do with a phone besides browsing reddit? The greatly upgraded camera will hopefully also enable a whole new class of things to do.<br />
<br />
And USB-PD support will make it easier to charge the phone when traveling light - without needing other extra investments - as it lets me charge the phone with the same charger I use for my laptop (Dell XPS 13). I could charge the phone via the laptop, but the USB outputs of the laptop work unreliably when it is turned off; can't risk it. But a travel charging setup would be a whole topic on its own, so let's not get into it now.<br />
<br />
In conclusion: I loved the Nexus; can the S9 get anywhere near it? That is what I'm hoping for; time will tell.Vazdehttp://www.blogger.com/profile/15908660641237632061noreply@blogger.com0