In this post I discuss practical ways to revitalize existing user interfaces, typically built on a thick-client architecture, and in some cases remove serious problems from them, without a complete rewrite that may be neither cost-effective nor achievable within realistic timeframes.
Every technology leader knows that by the time a new initiative begins, the technology selected for it is already on its way to obsolescence. In the real world we cannot wait for the next big framework; we must assess the current technology, even if it is nascent, and judge whether it will be here for the long haul and who is backing it.
Ten years ago, when bandwidth was orders of magnitude slower than it is today, screens had modest resolutions, and the cloud was significantly less mature, decisions were made about how to implement a particular system. Those systems may now be coming up for an overhaul.
The selections made were current, and potentially leading edge, at the time, but other solutions now exist. What happens to those existing systems with potentially millions of dollars of research and development embedded in their design? Do you throw all that away and start a brand-new development initiative to move to a new technology stack, or do you focus on what is really required from a user's point of view?
The Current Technology Is Now Obsolete
At the time the technology stack was selected and the infrastructure to host or provide the service was considered, decisions were made accordingly. There may have been a need to put large amounts of information at a user's fingertips, with a densely packed UI built to work offline. Systems were perhaps designed to synchronize large datasets locally for near-instantaneous slicing and dicing, with a UI that updated in near real time.
The CTO, together with the technology team, selects the most appropriate technology based on the available offerings, both open and closed source. They would also need to consider the technologies already in use in house and favor one of those, to avoid fragmenting the technology stack, which could itself lead to a mess of interconnected systems (microservices and loosely coupled systems had yet to emerge).
So, for example, the selection was made to develop the software as a thick client, which could utilize the full resources of the client system and have access to essential services such as the filesystem and graphics hardware.
Caring About The Underlying Technology
If you ask a user of a system whether they care that it is written in a particular language, utilizes a particular relational database management system, or works on multiple operating systems, the likely reply is that it just needs to work. So, following the age-old adage of "if it isn't broken, don't fix it," and given finite resources and opportunity costs, we continue to develop and improve those systems even after the framework or architecture has been surpassed. If the product is making good money, a technology rewrite may be justified, but what if it is a marginal product?
Re-Engineer The Things That Matter
The user requires that the system just work. As technologists, we care about the long-term support of the system and all the moving parts needed to make it function. Gradually upgrading the backend infrastructure (for example, the database management system or the operating system) is generally minimal effort, keeps the platform safe, and secures its future. But what about the user of the system?
With the advent of modern high-definition screens, where laptops can now have 4K panels, a massive growth in resolution that legacy systems never contemplated, something has to give. As humans, we can only perceive so much detail, especially on laptop-sized screens, so we 'zoom' the content: for example, viewing at 200% so that a 4K screen renders with sizing equivalent to a full-HD screen.
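The arithmetic behind that zoom factor can be sketched in a few lines; the resolutions used are standard 4K (3840×2160) and full HD (1920×1080):

```python
def effective_resolution(width_px, height_px, zoom_percent):
    """Return the logical resolution a UI effectively renders at
    for a given zoom (display scaling) percentage."""
    scale = zoom_percent / 100
    return int(width_px / scale), int(height_px / scale)

# A 4K laptop panel at 200% zoom renders with full-HD-equivalent sizing.
print(effective_resolution(3840, 2160, 200))  # (1920, 1080)
```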
It is interesting to note that if you keep a 4K laptop screen at its native resolution, you can see XX rows and YY columns of a spreadsheet running in Microsoft Excel, but only with the aid of a magnifying glass. So we zoom to 200%. However, a number of graphics libraries do not scale well, or their user interfaces become distorted, when the zoom is not 100%. Operating system vendors have done their best to mitigate the problem, but if your original UI was pixel-based or lacks built-in transparent scaling, you can run into serious problems.
With ever-increasing screen resolutions, the end user has two choices to make the UI cope: zoom more, or adopt a non-native display resolution. In some cases only the non-native resolution is workable. However, running non-native means that otherwise modern systems suffer in quality and detail.
So we technologists look for alternatives. Is there a more up-to-date version of the graphics system, or a different but similar one, so that re-engineering the user interface is not the herculean task of a complete rewrite? A good example: consider a thick client written in a legacy graphics system such as Microsoft WinForms. A natural progression would be to move it to a more modern graphics library such as Windows Presentation Foundation (WPF). But questions arise, such as "is the expertise available?" and "is WPF still supported?" While that move solves the immediate need, in this writer's opinion it doesn't go far enough.
Is There A Future Proof Baseline?
For those unaware, browser technology is a highly optimized system for rendering quality graphics at any resolution. It is sandboxed, so one browser tab does not directly affect the other open tabs (how many times have you had a website become unresponsive while your other tabs kept working just fine?). Google went a stage further and open sourced the project as Chromium, the 'browser window', i.e. the component that displays the web page content. Over the years Microsoft Edge has come to be built on Chromium, as have other browsers such as Opera. More importantly for us technologists, Chromium can be encapsulated in libraries that load into our legacy environments: in the Microsoft world it is called WebView2, and projects exist bringing it to Java (the Java Chromium Embedded Framework) and other environments. Electron is a notable cross-platform system built on this technology, and Microsoft MAUI, while still new, promises UI across mobile and desktop including Linux and Apple operating systems, but we are dealing with legacy systems in this post.
Using Web In Legacy
The ability to use the Chromium framework means we can now have HTML/CSS designers build a modern user interface and 'clip' it into an existing legacy system. There are a few things to consider, but these are specific to the product whose UI you are refreshing rather than technology problems.
How It Works
The idea is to dock the web library (control, if you prefer) into a window in the legacy system and unclip the existing UI in favor of one hosted in the web library. This opens up the world of web design. Since HTML/CSS can implement the same controls as more traditional UI frameworks, and more, the UI refresh can be achieved. Think of a web page, but hosted in a window inside the legacy application. It can be sized for a comfortable transition by the existing user base, make the best use of space, and does not need to be 'mobile aware', since the system runs in an existing environment.
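As an illustrative sketch outside the WinForms/WPF world the post discusses, the third-party pywebview package embeds a native web view (WebView2 on Windows) into a desktop window in a few lines of Python; the window title and page content below are placeholders:

```python
def build_ui_html(title):
    """Produce the HTML 'page' that replaces a legacy UI panel."""
    return f"""<!DOCTYPE html>
<html>
  <body style="font-family: sans-serif">
    <h1>{title}</h1>
    <p>This content scales cleanly at any zoom level or resolution.</p>
  </body>
</html>"""

# To display it in a native window (requires pywebview, which wraps
# the platform web view, e.g. WebView2 on Windows):
#   import webview
#   webview.create_window("Legacy App Panel", html=build_ui_html("Orders"))
#   webview.start()
```

The displaying lines are left commented because they open a GUI window; the page-building half is the part the legacy application would own.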
Let’s look at the possible solutions for the incorporation of this technology. Which use case does your application fall into?
Option 1: If we developed it now it could be completely or predominantly browser based
This option covers applications where round trips to a data source outweigh the processing performed locally, i.e. applications that mostly serve up content from the data source without much processing on the client. If such a system went fully browser based, there might be changes to the working practices of the tried-and-true system, and there may be problems if the system requires direct access to hardware, but the majority of the system could be hosted.
Option 2: There is too much processing locally for it to be a viable browser solution
This option is for legacy systems that incorporate a large local processing component. These can still benefit from integrating this technology, but the approach needs to be slightly different.
Implementing The Options
Implementing Option 1
To implement option 1, the new UI component is ultimately shunted onto a server-side web technology, because the page is predominantly a view onto data stored in some data source. So why not develop the whole UI in a modern server-side architecture, host the page in the web library, and send the information to the legacy system to deliver the output? If the output is saved on the server by the original legacy application, the web pages can consume that content directly once created. This is best illustrated by the flow below:
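A minimal sketch of the server-hosted half, using only the Python standard library; in practice the page would come from your chosen web framework, and the legacy application's embedded web control would simply be pointed at the URL (the page content here is a placeholder):

```python
# Sketch: serve the new UI from a server; the legacy app's embedded
# web control is pointed at http://127.0.0.1:<port>/ instead of
# drawing its old native panel.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"""<!DOCTYPE html>
<html><body>
  <h1>Report Selection</h1>
  <form action="/run" method="post"><input name="report"></form>
</body></html>"""

class UIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def start_ui_server(port=0):
    """Start the UI server on a background thread; returns (server, port).
    Port 0 asks the OS for any free port."""
    server = HTTPServer(("127.0.0.1", port), UIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```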
Implementing Option 2
Option 2, with its large amount of local processing, benefits from local content creation. The new UI is maintained locally so that processing can be fine-tuned. Remember that hosting the new UI in the legacy application, or even on the local filesystem, allows complete manipulation: as long as the legacy application's manipulation of the HTML produces valid HTML, the control doesn't care. Since there are no server-side trips, this can produce a highly responsive UI.
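A sketch of that local content creation: the legacy application rewrites an HTML page on the fly from locally computed results, then points the embedded web control at the resulting file. The template and field names are illustrative:

```python
# Sketch: the legacy app generates the UI page locally, with no server trips.
from pathlib import Path
from string import Template

TEMPLATE = Template("""<!DOCTYPE html>
<html><body>
  <h1>$title</h1>
  <table>$rows</table>
</body></html>""")

def render_rows(records):
    """Turn locally computed (name, value) results into HTML table rows."""
    return "".join(f"<tr><td>{name}</td><td>{value}</td></tr>"
                   for name, value in records)

def write_ui_page(path, title, records):
    """Rewrite the local page; the web control then (re)loads file://<path>."""
    html = TEMPLATE.substitute(title=title, rows=render_rows(records))
    Path(path).write_text(html, encoding="utf-8")
    return html
```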
Mixing and Matching
You may decide that what you want to do is mix and match both approaches. In its simplest form, pages are displayed to the user from a web-server URL. Now suppose processing begins and you need to show progress via, say, a progress bar. It is better to manipulate the progress bar locally, without a trip to the web server, and only redirect to a new page from the web server when the processing is complete.
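One way to do the local half of that mix is to inject a small piece of JavaScript into the hosted page as processing advances; with WebView2 this would go through a call such as ExecuteScriptAsync. Here we only sketch building the script, and the element id is a placeholder:

```python
# Sketch: update a progress bar in the hosted page without a server trip.
# The legacy app would hand this script to the embedded control
# (e.g. WebView2's ExecuteScriptAsync) after each unit of work.

def progress_script(percent, element_id="progress"):
    """Build the JavaScript that moves the page's progress bar."""
    percent = max(0, min(100, int(percent)))  # clamp to a valid range
    return (f"document.getElementById('{element_id}')"
            f".style.width = '{percent}%';")
```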
The choice between web-server hosting and local hosting depends on whether your legacy application does a lot of local processing and manipulates the UI based on local results. By utilizing local content and rewriting the UI's HTML pages on the fly, you can construct a highly responsive interface. Conversely, if the UI simply collects user information and passes it to an engine, for example, there may be benefits to hosting all of the pages on a web server, simply passing the selections to the legacy system and redirecting to another page on completion.
While a complete rewrite is the optimal solution, and I am not advocating one approach over another discussed in this post, it is good to be aware of alternatives. Depending on the degree of local processing needed, interim solutions like those highlighted here become real opportunities to provide a cost-effective, long-term answer to the UI refresh/revamp problem. After all, if dollars can be saved refreshing the UI while the underlying business intelligence, representing potentially millions of dollars of investment over the years, is preserved, shouldn't it at least be considered, certainly in the short to medium term?
All trademarks are the property of their respective owners.