Just a small reminder for you that yet another major release has been thrown in our direction: v17, aka “Business Central 2020 Release Wave 2“. Old news, I know. But I blame the pandemic ;-).
And there are a few features that I really would like to emphasize, because I think they didn’t get much attention before – and some of them came as a surprise to me:
New TableType property: Now you can actually make sure certain tables are only used as temporary tables. Cool! (See the sketch right after these points.)
Using Partial Records: A way to get rid of always reading all fields – which will also have a positive impact on table extensions – remember James Crowter’s blog? Now it’ll be a matter of getting these best practices into the common development principles of all developers.. .
Using Key Vault Secrets in Business Central Extensions: Interesting! Especially when you have secrets that you need to manage across multiple extensions – you can now set it up in the app.json, let all tenants, customers, apps, whatever, point to one or more key vaults, and manage the secrets centrally. Super!
New system fields: Data audit fields – Finally we have “Created At/By” and “Modified At/By” fields out-of-the-box for every table/record, managed by the system. This was apparently already in the list in my previous blog – but for some reason, I didn’t catch it as being interesting back then. But it sure is, in fact!
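To make the developer-facing ones a bit more tangible, here is a minimal AL sketch (object names, numbers and fields are made up for illustration) that shows the TableType property, partial records via SetLoadFields, and the new data audit fields:

```al
// Illustrative only: object names, numbers and fields are made up.
table 50100 "My Buffer"
{
    TableType = Temporary; // v17: the compiler now guarantees this table is only used as a temporary table

    fields
    {
        field(1; "Entry No."; Integer) { DataClassification = SystemMetadata; }
        field(2; Description; Text[100]) { DataClassification = SystemMetadata; }
    }
}

codeunit 50100 "My v17 Feature Demo"
{
    procedure ShowPartialRecords()
    var
        Item: Record Item;
    begin
        // Partial records: only the Description column is read from SQL
        Item.SetLoadFields(Description);
        if Item.FindFirst() then
            Message('First item: %1', Item.Description);
    end;

    procedure ShowDataAuditFields()
    var
        Item: Record Item;
    begin
        // Data audit fields: maintained by the platform on every record
        if Item.FindLast() then
            Message('Last modified at %1 by (security id) %2', Item.SystemModifiedAt, Item.SystemModifiedBy);
    end;
}
```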
You might wonder, what is Microsoft doing to announce and present this new release? Well, like in Spring, we’ll have yet another…
Virtual Conference
Makes sense, obviously! But somehow, it didn’t seem to get a lot of attention. I asked around in my company, and nobody really knew that it was happening. And not only that – it’s happening soon! October 21st! So be fast, and make sure to register (for free) here: http://aka.ms/MSDyn365BCLaunchEvent And this is the agenda:
Erik Hougaard
I’d like to conclude with Erik Hougaard’s video about “what’s new with AL in v17”. An interesting approach to finding out about new features ;-).
If you’re not subscribed to his channel yet – well – it’s about time ;-).
As said, this was going to be a small announcement – enjoy this new release! I’m already enjoying it, by fixing all the new CodeCop rules in the compiler (and that’s not just the “with” stuff) .. resulting in about 4500 changed files.. what can I say .. .
Recently, we have been going through upgrading our 65 apps to the newest release (v17). You might wonder: upgrade? Wasn’t this supposed to be seamless?
Well, let me explain what we did, how we handle stuff internally – and then maybe it does make sense to you why we take the steps we took for upgrades like this.
DevOps
It seems I can’t stop talking about DevOps ;-). Well, it’s the one thing that “keeps us safe” regarding:
Code quality
Succeeding tests
Breaking changes
Conflicting number ranges
… (and SO MUCH MORE)
You have to understand that in a big development team, you can’t just put development on hold, and you can’t let everyone contribute to the same issue (in this case: preparing your code for v17). You will probably continue development while somebody is prepping the code for the next version.
That’s where branching comes in (we just created a “v17prep” branch for every repo). Sure, we continued development, but once in a while, we basically just merged the master-version into the v17prep branch.
Now, with our DevOps pipelines, we want to preserve code quality, and some of the tools we use for that are the code analyzers that are provided by Microsoft. Basically: we don’t accept codecop warnings. So, we try to keep the code as clean as possible, partly by complying with the CodeCops (most of them) that Microsoft comes up with. The pipeline basically fails from the moment there is a warning.. .
I absolutely love this. It is a way to keep the code cleaner than we were able to with C/SIDE. And the developer is perfectly able to act on these failures, quickly and easily, because they get feedback from the pipelines.
But – it comes with challenges as well:
CodeCops are added by Microsoft with new compilers, which are automatically installed in VSCode. So it could very well happen that on a given morning, lots of new failures pop up in your development environment.
CodeCops are added in new versions of BC – so pipelines begin to fail from the moment they are run against a higher version. Since we are upgrading … you feel what will happen, right? ;-).
Next, obviously, we have “automated testing” also deeply rooted in our DevOps: not a single PullRequest can be merged with the stable branch if not all tests have run successfully. When implementing upgrades, I can promise you, there will be breaking tests (tests will simply fail for numerous reasons – Microsoft changed behaviour, your test fails) – and if not, well, maybe you didn’t have enough tests yet ;-): the more tests you have, the more likely one will fail because of an upgrade. And that’s totally ok! Really. This is one of the reasons why we have tests, right? To know whether an upgrade was successful? So, absolutely something the pipeline will help us with during an upgrade process!
Yet another check, and definitely not less important: the “breaking change” check. The check that prevents us from allowing any code that breaks against a previous version of our app. It’s easy:
We download the previous version of the app from the previous successful (and releasable) CI Pipeline.
We install it in our pipeline on a docker container.
Then we compile and install the new version of our app.
If this works, all is well; if not, it’s probably because of a breaking change that we need to fix. (Tip: using the “force” is NOT a fix .. It causes deployment problems that you want to manage manually. Trust me: don’t build a default “force deploy” into a Release Pipeline, or you’ll end up with unmanaged data loss sooner or later.)
That’s the breaking-change check – but do know that in that same pipeline, we also run tests. And in order to do that, we need a container that has my app and my test app in it. And in order to do THAT, we need all dependent apps in there as well. So, we always:
Download all dependent apps from other pipelines – again the previously successful CI Pipeline of the corresponding app.
Then install all of them so our new app can be installed having all dependencies in place
If this doesn’t work: that’s probably a breaking dependency, which we’ll have to fix in the dependent app.
A breaking dependency is rather cumbersome to fix:
First create a new pullrequest that fixes the dependent app
Wait for it to run through a CI pipeline so you have a new build that you can use in all apps that have this one as a dependency
The app with the dependency can pick it up in its pipeline
So in other words: it’s a matter of running the pipelines in a specific order, before all apps are back on track again. It’s a lot of manually starting pipelines, waiting, fixing, redoing, …
I’m not saying there are no other ways to do this, like putting everything in one repository, one pipeline, .. (which also has its caveats), but having each app in its own repository really works well for us:
It lets us handle all apps as individual apps
It prevents unintentional/unmanaged interdependencies between apps
It lets us easily implement unit tests (test apps without being influenced by other apps being installed)
It notifies us of any breaking changes, breaking dependencies, forbidden (unmanaged) dependencies, …
Why am I telling you this? Well, because Microsoft broke a dependency – a classic breaking change, not in the base app, but in testability .. acceptable to Microsoft because “it’s just a test-framework”, but quite frustrating and labor intensive when this needs to go through a series of DevOps pipelines. The broken dependency was a simple codeunit (Library – Variable Storage) that Microsoft moved to a new app.
I get why they did it: this is more of a “system library” than a “function library”, and basically the system app needs to be able to get to this, which shouldn’t rely on anything “functional”. So architecturally, I totally understand the change. But .. it’s breaking “pur sang”, and I really hope things like this will be done using obsoletions instead of just “moving” .. . I’ll explain later what it involved to handle this issue for v17.
Since we want to comply with all codecops and implement all new features of v17, I think I found a method that works for us to be able to work on it, spread over time.
The flow
So, DevOps (and SCM) is going to be the tool(s) that we will use to efficiently and safely tackle these problems.
Step 1 Create branch
I already mentioned this – all preparation can be done in a new branch. Just create a new branch from your stable branch (master?), in which you can do all your preparation jobs. While people are still adding features in the meantime, simply merge the new commits into this branch from time to time – possibly bringing in new code that does not comply with the new version yet – but that should be easily fixed.. .
Step 2 Disable the (new) codecops that cause problems (warning or error)
This step is, in fact, the time that you buy yourself. You make sure that you still comply with all rules you did not disable; the rules you don’t comply with yet, you first disable, to later enable them one by one so you have a clear focus when solving them. For us, this meant we added quite a bunch of codecop rules to the ruleset:
All of which we meant to fix. Some more efficiently than others .. . I wanted to comply with most of them.
Step 3 Make sure it compiles and publishes
It wasn’t “just” codecops that we needed to worry about. As said, there was also a breaking change: the “Library – Variable Storage” codeunit that moved to its own app. Now, lots of our test-apps make use of that codeunit, so we needed to add a dependency in all our test-apps to be able to “just” run our tests against the v17 dev environments:
Step 4 Enable codecop and fix
Up until this point, it had only taken us about 30 minutes: creating a v17 container, creating the branch, disabling codecops .. all find/replace, so we efficiently did the same for all apps .. and we were good to go: we had apps (and their test-apps) that did NOT show any warning in the problems window, and that could be published to a default v17 container where we were able to run tests. Time to get busy! To solve the codecops, we simply applied this subflow:
Switch on a rule by removing it from the ruleset-file
Commit
Solve all warnings
Commit
Back to 1
And we did that until all the rules we wanted to fix were fixed.
Step 5 Pullrequest
From that moment, we started to pullrequest our apps to the master branch, to perform our upgrade. Basically, I wanted to have all pullrequest validation builds working before I started to approve them all to the master branches. This was a very tricky thing to do .. well .. it was actually not possible, unfortunately.
Simply said: all apps with dependencies on apps that used the “Library – Variable Storage” codeunit simply failed, because the dependencies were not there yet in the previous versions of those apps, so the pipeline was not able to deploy them to a v17 container for checking breaking changes or installing the dependent apps.
There is always a solution .. Since I don’t want to just abandon DevOps, this is the only way I saw possible:
Disable the breaking-changes check in the yaml for this pullrequest. This is obviously not preferable, because despite the MANY changes I did, the pipeline is not going to make sure that I didn’t do any breaking changes.. . Fingers crossed.. .
Approve all apps one by one, bottom up (apps with no dependencies first). This way, I make sure there is going to be a master-version of my app available (WITH the right dependencies) for the next app that depends on my bottom-layered app. So, I had to push 65 pullrequests in the right order. The big downside was that I only saw the real pipeline issues once the pipeline was finally able to download the updated dependent extension. So there was no way for me to prepare (I couldn’t just let 65 apps build overnight and have an overview in the morning – I could only build the ones that already had all their dependent apps updated with the new dependency on the “Library – Variable Storage” app), and I had to solve things like breaking tests “on the go”. This all makes it very inefficient and time consuming.. . I reported it to Microsoft, and it seems that it makes sense to them to also see test-apps as part of the product, and to not do breaking changes in them anymore either (although I understand this is extra work for them as well… so, fingers crossed).
Some numbers
The preparation took us about 3 days of work: we changed 1992 files out of a total of 3804 files, spread over 65 apps. So you could say we touched 50% of the product in 5 days (now you see why I really wanted to have our breaking-changes check as well ;-)). The last step – the pullrequest – took us an extra 2 days, which should have been just a matter of approving and fixing failing tests (only 5 tests out of 3305 failed after the conversion).
Any tips on how to solve the codecops?
Yes! But I’ll keep that for the next blogpost (this one is long enough already ).
If you’ve been following the latest and greatest from Microsoft Dynamics 365 Business Central, you must be aware of “what’s cooking in Microsoft’s Lab“. In short, Microsoft is working on the possibility to generate a DGML file for the extension that you’re compiling. A DGML file is basically a file that contains all code cross references. Remember “where used” .. well .. that! I can only recommend watching Vincent‘s session “BCLE237 From the lab: What’s on the drawing board for Dynamics 365 Business Central” from Microsoft’s Virtual Launch Event. You’ll see that you’ll be able to generate an awesome graphical representation of your dependencies:
(sorry for the bad screenshot – please watch the video ;-))
After you have seen that session, you might wonder why I created my own “Dependency Graph”. Well .. you know .. I have been wanting to do this for a very long time. Actually ever since I showed our dependency analysis, where we basically created a Graphviz representation of our C/AL code .. a tool which I shared as well. That was working for C/AL, and I wanted to be able to show a dependency analysis based on the app.json files. Fairly easy to do .. in PowerShell. But .. we have a decent development environment now .. and I had already done some minor things in an extension .. so why not …
Visualize app.json dependencies in VSCode using GraphViz
There is not much to explain, really. In my CRS AL Language Extension, I created a new command that you can find in the command palette:
This command will read all app.json files in your workspace (so this function is really useful in a Multi Root workspace) and create a .dot (graphviz) dependency file from it:
It’s a really simple, readable format. Now, in VSCode, there are extensions that let you build and preview this format. I liked the extension “Graphviz Interactive Preview“. If you have this extension installed, my command will automatically open the preview after generating the graph. You can also do that yourself by:
With something like this as a result:
Settings
I just figured that sometimes you might want to remove a prefix from the names, or not take Microsoft’s apps into account, or not show test-apps, or… . So I decided to create these settings:
CRS.DependencyGraph.IncludeTestApps: Whether to include all dependencies to test apps in the Dependency Graph.
CRS.DependencyGraph.ExcludeAppNames: List of apps you don’t want in the dependency graph.
CRS.DependencyGraph.ExcludePublishers: List of publishers you don’t want in the dependency graph.
CRS.DependencyGraph.RemovePrefix: Remove this prefix from the appname in the graph. Remark: this has no influence on the ‘Exclude AppNames’ setting.
So, with these settings:
You can make the above graph easily a bit more readable:
Now, to me, this graph makes all the sense in the world – because I know what these names mean. But please let it loose on your extensions and let me know what you think ;-).
Enjoy!
And I’m looking forward to the DGML abilities and what the community will do with that!
A few days ago, we upgraded our product from 17.0 to 17.1. We had been looking forward to this release, because we usually never release a product or customer implementation on an RTM release of Business Central. So .. finally 17.1, finally we could continue to upgrade the product and start releasing it to customers before the end of the year. While this was just a minor upgrade .. things started to fail. We have quite some apps to build, and some of them had many failing tests.
All these failing tests for just a minor upgrade? That never happened before. What happened? Well …
From Microsoft Dynamics 365 Business Central 17.1, Microsoft enabled upcoming features
You are probably aware of the new “feature” in Business Central, called “Feature Management“. A new functionality that lets users enable upcoming features ahead of time. It indicates which features are enabled, when they would be released, and it gives you the possibility to enable them in a certain database (usually a demo or sandbox to test the feature). From 17.1, Microsoft enables 5 of these features out-of-the-box in the Cronus database that comes with the DVD or Docker Artifact.
Now, these features are business logic. So, by enabling them, you’re going to enable new business logic. New business logic in an upgraded database means: a difference in behavior. A difference in behavior in a database with a crapload of tests usually means …
Failing tests
Exactly: DevOps will execute your tests against a new Cronus database where these features are enabled, so your tests will fail, your pullrequests will fail, … basically your development will come to a halt :-). This needs immediate attention, because before being able to continue development/testing/.. this needs to get fixed.
My first focus was looking at the cause of these failing tests: “a changed database with new business logic, features that are actually still not really released, just part of the database as an upcoming feature. So, how can I change my docker image to be a correct one with correct enabled features??”. Or in other words: I was looking at docker to solve this problem.
And I was wrong….
It was actually a remark from Nikola Kukrika (Microsoft) on my twitter thread that made me look at it from a different angle. Sure, my tests fail because of the enabled features. But this is actually good and useful information: they tell you the current business logic is not compatible with the upcoming feature, and I should also indicate that in code by disabling the feature during the tests. Doing so, I actually also give myself a “todo-list” and a deadline: all disabled features need to get enabled (or in other words: I need to make my software compatible with the upcoming version of the business logic) – and even more: it will fail again from the moment the features are actually released. So you kind of get warned twice. Looking at it from this angle: you WANT these failing tests during an upgrade.
Luckily, disabling the features wasn’t so difficult. This is what we did:
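Roughly along these lines (a hedged sketch, not our exact code: the feature ID is an example, and you should verify the virtual “Feature Key” table and its Enabled values against your version; we call something like this from the test initialization):

```al
// Sketch: disable an upcoming feature so tests keep running against the current business logic.
// The feature ID ('SalesPrices') is an example - check the "Feature Management" page for the real IDs.
codeunit 50110 "Disable Upcoming Features"
{
    procedure DisableFeature(FeatureId: Text[50])
    var
        FeatureKey: Record "Feature Key"; // virtual table behind the "Feature Management" page
    begin
        if not FeatureKey.Get(FeatureId) then
            exit;
        if FeatureKey.Enabled = FeatureKey.Enabled::None then
            exit;
        FeatureKey.Enabled := FeatureKey.Enabled::None;
        FeatureKey.Modify();
    end;
}
```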
Quite honestly, I’m fully into the process of getting our apps (about 30 of them) to AppSource. We chose to have OnPrem implementations first, basically to get the experience, and also because it just still sells better in Belgium (immediate market ;-)). Anyway ..
Recently, there was a call with Microsoft with the topic “what to do in order to pass technical validation”. Given my current state – quite interesting to me, obviously ;-). It was a very good presentation from Freddy, with a clear overview of how Microsoft handled it in the past, how it’s handling it now .. and what to expect in the future.
What I was a bit surprised about was that quite some partners still just upload a version of their app(s) and let Microsoft figure out what the problems are in terms of technical validation .. . And also that some partners had to wait quite some time before getting feedback. Well, there was quite a clear explanation that the one had to do with the other:
We are much faster in passing an app than in failing it.
And that’s normal. The validation is done by running a script. If the script passes, the validation passes. If the script fails .. Microsoft needs to find out why it fails and report that back – which is a manual process. Basically meaning: the more you check yourself, the faster your (and anyone else’s) validation experience will be!
What can you check yourself?
Well – basically everything that needs to be checked – the entire stack. Later, I will tell you how, but first, let’s see what’s so important to check – and apparently forgotten by a lot of people. Let’s start by this easy link: http://aka.ms/CheckBeforeYouSubmit . It ends up on Microsoft Docs explaining the details. During the call on Tuesday, Freddy highlighted a few common things:
AppSourceCop
You need to comply with every single rule in the AppSourceCop. Well – that’s what he said on Tuesday. Today, during a session on BCTechTalk, he corrected this. There is actually a ruleset that they apply when checking the apps, which you can find here (not sure how long the ruleset will be available at that link .. ). So, in short – enable it in your settings!
And if convenient for you, just apply the ruleset I mentioned (I don’t – we simply comply with every single AppSourceCop rule)
Breaking changes
When we think of breaking changes, we think of schema-related breaking changes. But that’s not all. In fact, AppSource-related breaking changes can also be (a non-breaking alternative is sketched after this list):
A renamed procedure
A renamed or removed control on a page
…
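For example, renaming a public procedure is breaking for AppSource; the non-breaking route is to introduce the new name and obsolete the old one, which just forwards the call. A minimal sketch (the names are made up):

```al
codeunit 50120 "My Public API"
{
    // Keep the old name alive (and forward it), so dependent apps don't break.
    [Obsolete('Replaced by CalculateDiscountPct.', '18.0')]
    procedure CalcDisc(Amount: Decimal): Decimal
    begin
        exit(CalculateDiscountPct(Amount));
    end;

    procedure CalculateDiscountPct(Amount: Decimal): Decimal
    begin
        if Amount > 1000 then
            exit(10);
        exit(0);
    end;
}
```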
So .. there are A LOT more breaking changes when we think in terms of AppSource. In fact, it’s important that you make yourself familiar with the settings for the AppSourceCop (aka AppSourceCop.json). At minimum, you should refer to a previous version of your extension. And in order for the compiler to take that into account as well, also provide the app-file of that version. Here is an example of having the AppSourceCop.json (in the root of the app), the setting pointing to my previous release, and the actual released app in a folder in the project.
Note – for the VSCode compiler to work, you might have to copy that app in the symbols-folder. I just like to have it separate, so its intention is very clear (and as you can see, I .gitignore my entire symbols folder).
Affixes
In the screenshot above, you see I also set up an affix. It is important that you reserve your affix at Microsoft (it’s done by email – but I’m reluctant to share email addresses on a public platform like this .. but look at my resources below; in the documentation you’ll find what mail address to contact ;-)), and set it up in the AppSourceCop.json to make sure you don’t forget to prefix ANYTHING (just about anything in extension objects, and every new object). Small tip – my “CRS AL Language Extension” can help you with the prefixing ;-). So – set it up in the AppSourceCop.json, and the compiler will make sure you won’t forget!
Code Signing
This is a somewhat trickier thing to check. What you need to know: for AppSource, you need to codesign your extension with a code signing certificate. The compiler will not sign your app, so this is usually done by a PowerShell script. But .. that’s not all .. what you should also do is test the resulting (signed) app by publishing and installing it without the “SkipVerification” switch in the publish cmdlet. Don’t forget to check that, because that is the only way to be really sure the codesigning actually was successful!
Publish to all supported countries
Many partners “just” develop against one localization (like “w1”). But doing that doesn’t mean that your app can be deployed against all other countries. So you should also check whether your app can be published (and even upgraded) against all the localizations that you want to support. PowerShell is your friend: set up a container for the localization, and start publishing your apps!
Name and publisher changes
When you change the name or publisher of your app – if I understood it correctly – at this point, that’s also a breaking change. That might change in the future though .. (only the AppId should be leading). The way to check this is to upgrade from your previous version to the new version (so basically: upgrading your app). From VSCode, this once again is difficult to do, other than running a script which sets up an environment with your previous app, so you can install your new app on top of it.
How can I do my own validation?
So – a few are easy and configurable in VSCode. For others, you need scripts! Well, this is a part that is quite interesting for DevOps, if you ask me. Just imagine: a nightly pipeline that tests all the things (and more) above .. and reports back what the current status of your apps is. And no – you don’t need to create it yourself ;-).
BcContainerHelper
As always – it’s BcContainerHelper to the rescue. Freddy wouldn’t be Freddy if he didn’t make your life as easy as at all possible. Right around the moment I wrote this post, Freddy released a function “Run-AlValidation” – specifically meant for validating your app for AppSource. It’s quite a script – I tried to read through it .. wow! In general (and I’m being VERY general here – you can read the full explanation on Freddy’s blog ;-)), it will:
Loop through the countries and versions you specified, and for each combination:
Set up a Docker container
Install the apps you depend on
Run AppSourceCop (basically: compile with AppSourceCop enabled)
Install previous version of your app to the environment
Upgrade to the new version of your app
As you see, it takes care of all the issues above. If all of this passes .. I think you’re quite good to go and upload your app to Partner Center ;-). As Freddy explains in his post, it’s still being changed and optimized – so frequently refresh your BcContainerHelper to get the latest and greatest!
DevOps fanciness with ALOps
You might not be using ALOps, but this section might still be interesting, as it also explains some neat yaml-stuff.
Quite some partners are a bit more demanding in terms of DevOps and development flow optimization. ALOps is one of the tools that make it easy.
So, I started thinking whether this was possible with the building blocks that are already present in ALOps – taking advantage of the mechanics of DevOps. And of course it is! Quite interesting, even!
Because ALOps is a set of DevOps-specific building blocks – it is actually quite possible to configure a set of yaml-files that does quite a good job in validating your code for AppSource. How it works is that one yaml-file will be there for the pipeline definition, which calls a yaml-template, which has the stages and steps.
Why two yaml files? Well, doing it like that, makes it possible to create a multistage-pipeline, depending on the countries and versions you set up in the (first) yaml: each combination of country and version would be a stage. And .. stages can run in parallel, so as such, this could tremendously speed up your validation process, when you have a pool with multiple agents ;-). When you have to validate against all countries, and 2 versions .. any performance gain is always welcome ;-).
The files, you can find among our examples here in the folder “AppSourceValidation”. You’ll find 2 files: a template, and a pipeline. Just put them both together in your repo (just use a subfolder “AppSourceValidation” or something like that), and in DevOps, refer to the pipeline-yaml when you create a pipeline.
You see it’s quite readable – and the “magic” (if I can call it that) is the two arrays “countries” and “selectVersions”. Those will cause multiple stages to be created when running this pipeline. In the template file, you’ll see the loop here:
Now, the individual steps are a different approach than what Freddy does in his pipeline: Freddy works with app-files, I work with your code. But you can see that we can actually create different templates – like one that would for example download artifacts from a release pipeline – or something like that. It’s quite simple to reconfigure, since all is just a matter of compiling code and publishing apps ;-). I might add other template files in here in the future! Just let me know if you’re interested in that.
The outcome is pretty neat. Here is a successful validation of 4 combinations, in a multistage pipeline:
More interesting are the errors. Just the default pipeline feedback gives you all you need:
And remember, these 4 stages could run in parallel, if you have 4 agents in one pool (which is not so uncommon …), all 4 would just run at the same time!
If we would all start validating our own apps now, we would tremendously speed up the validation process at Microsoft. So, let’s just do that, shall we? ;-).
Resources
Regarding AppSource, there are a few resources that I got on Tuesday that are worth sharing!
You might wonder: why would I need this? Why would I need to download source code of Business Central, while I can simply access it through the symbols when I’m working in VSCode – or even better, while I can simply click the symbol, and look at the code from there?
Well …
Searchability
Didn’t you ever wonder: “hey, in the previous version, this codeunit was still in this app – where is it now?” Or something along the lines of: “Where can I find an example of a test-codeunit where they create and post picks?” In the old days, it was easy to get to the source code. It was simply an “export all”. These days, Microsoft’s source code is spread over a multitude of apps. Either as “Application” or as “platform” .. it doesn’t really matter.
Sometimes, it’s just very useful to simply be able to search through every single letter of source code Microsoft has released as part of a certain version of Business Central. So …
PowerShell to the rescue!
I wrote a little script:
As you can see, I’m using BcContainerHelper to simply:
Download the artifacts and its platform
For each “.source.zip”-file, I’ll unpack it in a decent destination directory
You can apply “filters” or “excludes” when you’re for example only interested in a portion of the apps – just to speed up the process.
When done, you’ll have a directory (that you configured in the variable “$Destination“). Simply open the required version in VSCode, and you’ll be able to search all files.
As you see in the first line of the script .. you can indicate the exact version of BC by providing the right parameters to the “Get-BCArtifactUrl” CmdLet. More info here: Working with artifacts | Freddys blog. Maybe one more interesting example – you can also do something like this:
To get all Test-apps from the be, nl and w1 localizations.
Now – this is just the tip of the iceberg of something that someone else in the community (Stefan Maroń) is working on – which is currently in approval state at Microsoft. Nothing to share just yet – but fingers crossed it’s going to get approved, and then the above will just be completely wasted internet space!
You might already have seen my session on DynamicsCon about Design Patterns. During that session, I mentioned a tool “mdAL”. If not .. well, here it is (starting at the mdAL part):
This tool has seen some major updates – time to blog about it. ;-).
Disclaimer
I am not the developer of this tool – so not a single credit should go my way. The main developer of this tool is Jonathan Neugebauer, a German genius who made this for his PhD, and who is putting it out there for free for us to benefit from. And I promise: we can benefit from this .. a lot! I just discovered this tool by chance (basically, Jonathan contacted me through my blog for a review) – and since I worked on a very similar thing 18 years ago (VB.Net, generating code from Rational Rose (UML) models – don’t ask .. ), I was immediately hooked and saw immense potential!
Model Driven Engineering
To understand the goal of the tool, you need to understand what is meant by “Model Driven Engineering”: it is a methodology that focuses on creating and exploiting domain models (let’s call them “design patterns”), which are conceptual models of all the topics related to a specific problem.
Indeed .. I got that from Wikipedia ;-).
Let me try to explain this by means of some …
Examples to convince you that this might be more useful than you might think ..
1- We want to create master entities where we implement the default stuff, like number series. A lot of this code is based on a pre-existing “model” of how to implement No. Series (see the sketch after these two examples). Just imagine this can be generated in minutes…
2- We have a posting routine, based on a master table and documents. So we need journal, ledger entries, register, document tables, … and the codeunits for the posting routine. Every time quite the same, but a bit different. What we did in the old days: copy “Resource Journal” and all its codeunits, and start renumbering and renaming ;-). Just imagine this can be generated in a few minutes…
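To give an idea of what that No. Series “model” looks like when you write it by hand (a simplified, hand-written sketch of the classic pattern – not the actual mdAL output; the “Container Setup” table and its “Container Nos.” field are made up):

```al
table 50130 Container
{
    fields
    {
        field(1; "No."; Code[20])
        {
            trigger OnValidate()
            begin
                if "No." <> xRec."No." then begin
                    ContainerSetup.Get();
                    NoSeriesMgt.TestManual(ContainerSetup."Container Nos."); // manual numbers must be allowed
                    "No. Series" := '';
                end;
            end;
        }
        field(2; Name; Text[100]) { }
        field(3; "No. Series"; Code[20]) { TableRelation = "No. Series"; Editable = false; }
    }

    keys
    {
        key(PK; "No.") { Clustered = true; }
    }

    var
        ContainerSetup: Record "Container Setup"; // hypothetical setup table holding the "Container Nos." No. Series
        NoSeriesMgt: Codeunit NoSeriesManagement;

    trigger OnInsert()
    begin
        // Classic pattern: assign the next number from the No. Series when none was entered manually
        if "No." = '' then begin
            ContainerSetup.Get();
            ContainerSetup.TestField("Container Nos.");
            NoSeriesMgt.InitSeries(ContainerSetup."Container Nos.", xRec."No. Series", 0D, "No.", "No. Series");
        end;
    end;
}
```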
Don’t imagine, just install “mdAL” and “do”!
As you have seen in the video, all this is possible today, in AL, in minutes. Jonathan’s ultimate goal was to have a model driven AL, which basically means: you don’t write code anymore, you basically model it in an understandable language, and the code is generated for you.
Today, the tool has had a major upgrade, and now you can spit out code, without the necessity to have a full model. Basically meaning: you can now use it also as a pure code generator. In my opinion, this makes its functionality even more useful for the community!
The tool can be found on the VSCode marketplace here: mdAL – Visual Studio Marketplace. Please take some time to read through the comprehensive documentation he has provided as well here: mdAL (mdal-lang.github.io). It’s not long, and it’s well worth the read! It explains how to install it, a quick start, the snippets (yes, it has snippets – it’s not even going to be minutes, it could be just seconds ), and so on.
Let’s see how to handle the above examples
1- just master tables
Consider this:
This is an mdal-file with a description of my master-object “Container”. All this can be created with mdAL-snippets – so this really is just a matter of seconds. Very readable:
A master-object “Container”
A card and list page
A template “Name”
Then, we simply “Generate AL Code”
And as a result, you’ll get compilable code in many objects, with many models implemented. The generated code can be found in the src-gen folder:
It has…
implemented the table, with:
No. Series
Search Name with validation code
A blocked field
Comment, which is a flowfield to the default (and extended) comment table
Fieldgroups are defined (dropdown and brick)
Record maintenance to sub tables (commentline)
AssistEdit for No. Series
…
a setup table for the number series
A list page with
Action to Comments
A card page with
AssistEdit implementation
Action to Comments
A setup page in the right model
An enumextension for the comment line
And more!
How awesome is this? Don’t need comments? Remove it! Don’t need No. Series? Remove it!
Oh – did I forget to add “Dimensions” into the equation? Hold my beer…
Done! 2 seconds. Result: we now have Dimension fields and code where necessary, like:
And even with events:
2- Posting Routine & Documents
Not enough? Then let’s now take it to a whole other level. Posting routines and documents. Consider this:
I agree – it’s too simple, I need more fields – but look what happens when you generate code. All these files were generated:
A glimpse:
All posting codeunits with boiler plate code
Source Code implemented and extended in source code setup
Journal, Ledger entry and register
The journal already has 15 “boiler plate” fields (did you see how many I defined?)
Header and line with all boiler plating
Document number series as well
Document status (release/reopen)
Posting No. Series
…
Just too much to mention.
My conclusion and recommendation
My conclusion is simple: this is going to save us a crapload of time! Even if you “just” need some master table and some sub tables (yes, sub-entities are possible as well!). In my opinion it’s an unmatched level of code generation in our Business Central Development world. This is going into the extension pack! My recommendation: play with it! Get used to it! Give feedback to it if you have any! And once you’re comfortable with it – thank Jonathan! ;-).
This is my first post of 2021, so I’d like to take the opportunity to wish all of you all the best, all the health, all the safety, and .. may we finally meet again in person in 2021! I’d be very happy to have much less of this:
And more of this:
Anyway ;-). Let’s get to the topic..
I got an internal question from our consultants on how they could access the license information for an OnPrem customer. There are a few scenarios where that’s interesting. Just to check which license is in production. Or how many objects or users they bought. Well – anything license-wise ;-).
And you know – a consultant usually doesn’t have access to SQL Server, PowerShell or anything like that. So it had to be accessible from the web client.
I was like: that’s easy! There are virtual tables that contain that information:
“License Information” contains the actual text of the license file.
“License Permission” contains the information on object level.
And since we can run tables, it’s just a matter of finding the right table number, and off we go.
You might remember my blogpost: Getting not-out-of-the-box information with the out-of-the-box web client? It explains that I can get to all objects through table 2000000038, and there, I can find the corresponding table numbers:
“License Information”: 2000000040
“License Permission”: 2000000043
So what the heck – just run it with that ID, and that’s it, we’re done! Consultants have a solution!
Well … no …
In the web client, you can’t just run every system table. I suppose the ones that are not actual data in a SQL table, and that need to get their data through some business logic, can’t be shown by just running the table.
So here’s a little trick
You can show those tables by creating your own page based on the table. And with the wizards of Andrzej’s extension, it’s super easy to do (or just type it yourself – see the sketch after these steps)! Just:
Create a page (List):
Add the fields:
Done! When you run this page now, you’ll see the license information from the Web Client
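For reference, the page itself is as simple as this (a sketch; as far as I know the “License Information” virtual table just exposes a line number and a text line – verify the field names against your version):

```al
page 50140 "License Information List"
{
    PageType = List;
    ApplicationArea = All;
    UsageCategory = Lists;
    SourceTable = "License Information"; // virtual table 2000000040
    Editable = false;

    layout
    {
        area(Content)
        {
            repeater(Lines)
            {
                field("Line No."; Rec."Line No.") { ApplicationArea = All; }
                field(Text; Rec.Text) { ApplicationArea = All; }
            }
        }
    }
}
```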
Is this useful? For us it is. Our consultants now have a page (it’s part of one of our library-apps) they can easily check themselves without bothering someone who is familiar with PowerShell.. .
If you want to know how to do it with PowerShell – well – there is an out-of-the-box CmdLet: Export-NAVServerLicenseInformation
I personally didn’t see another easy solution for getting to that information from the Web Client. I do know, this involves development and deployment .. And also maintenance of an app .. . So if you have a less intrusive solution, I’m all ears :-).
I’m not writing a blog about every single (new) command in my “CRS AL Language Extension”. But this Sunday, I added an interesting one. One that I should have created a long time ago – but I simply didn’t think of it, until Daniel (TheDenster.com) explicitly asked for it on GitHub. Just imagine: you’re building an app with many pages, and you want to build a page, test it, build the next one, test it .. . You kind of have to:
Publish the app
Run the object after the app was published
The request was simple: combine both steps in one command in VSCode:
Publish & Run Current Object
And personally, I have been using it all day :-). So I thought – let’s just share it, so people know it exists and don’t have to find out by accidentally running into it ;-).
How to run it
Well, it’s simple, just:
What does it do .. exactly!
Well, it IS going to run a command from the “AL Language” extension first. The exact statement is this one:
The reason why I’m running the “without debugging” version is that it’s the only way for me to avoid having 2 browser tabs opened. Because the publish step will open a tab, and then the “Run Object” (which simply opens a URL) will open another tab. You might say: “dude, you’ve got a setting for that in the launch.json”. And you are right. In the launch.json, we can set “launchBrowser” to false:
But …
This only works with the command “publish without debugging”, not with the normal F5. So .. that’s why I’m using this command ;-).
Keyboard Shortcut
You might have been using the “Run Current Object” already in the past.
Well, this still exists, and there is a keyboard shortcut for it. I didn’t do that for the new command, but do know, you can do that yourself! You can even take this keyboard shortcut and attach it to this new command. Simply go into
Find the command and press the “+” sign on the left of it
Now, press the keyboard shortcut that works best for you – you can even override another one, like I did here to override the already existing “Run Current Object” shortcut:
Quite some time ago, I started commenting on these release plans (I actually did that because one of the features at that point (being “code customized base app”) was something that I felt needed some criticism (and still does, by the way ;-))). So, by matter of tradition – and it being a good way for me to get myself informed – let’s go through the points and give some comments or extra information (if any).
Is this post useful? Maybe not – you can just stop reading and get all the facts here:
What I usually do, is categorize them in my favorite added features, and my somewhat less favorite features.. . This is something you need to decide for yourself, obviously – but it is still my blogpost, so yeah, let’s just do that again ;-).
Oh yes – you read it right! This came a little bit as a surprise to me (I might not have been paying attention) – but a very welcome surprise ;-). Finally we’ll be able to better tune indexes – at least for the “adding” part. It doesn’t say much about the limitations .. but any added ability along these lines is very much appreciated!
If you want to know more about the actual scope .. Kennie explained some more on this Twitter thread:
In v18, we only support that you can add indexes to the base table.
All of #msbc365bc twitter folk: Would you consider the need to add indexes to other table extensions essential? Or should we rather spend time on some other indexing features?
Again .. HUGE! Finally! Finally we’ll be able to extend a report instead of copying it to our own range and substituting it. This will tremendously simplify customizations in PTEs and OnPrem. Awesome! It was definitely one of the frustration points of my devs.. .
This is one of the Application enhancements that Microsoft worked on – and an important one. I think every one of us at some point either had to correct dimension postings, or create some solution so that users could do that themselves. Nice one! At least, we’ll be able to correct dimensions on the General Ledger Entries (not the source docs .. but it’s a big step forward!)
Performance is always a big topic. Especially in the cloud. I remember a big Twitter thread recently that basically questioned again the performance in the cloud .. and the huge difference compared with OnPrem. So I look forward to ANY kind of performance improvements!
The improvements in this topic are somewhat limited, though – they basically applied the same optimizations as we’ve seen regarding factboxes, but now also on Role Centers, where parts will only load when shown (so if you’d scroll down). That said – the role center is an important page to improve nevertheless ;-). I do hope though there are more performance improvements in the queue.. something in the line of better (and more intelligent?) usage of the “SetLoadFields” principle in various scenarios.
It’s a general title, but when I saw the first item in the list of improvements, I already liked it a lot: “Double-click a record in a list”. Adding “intuitivity” .. cool!
All Features
Again, the rest of the points seem – in my limited world – somewhat less important, simply because we either didn’t miss them, didn’t think of them, or already solved them differently (like the printing features), .. or anything else. I’m sure that in most cases, and for many partners, they absolutely have value – if not more than what I listed above!
I decided to have the complete list below, so you’ll find whatever is above also below, divided in the categories that Microsoft has put them in – some with a bit of comments of my own ;-).
Administration
Microsoft did some improvements for partners to support their customers, to administer their tenants and so on.
Improvements for the Delegated Administrators– It seems that delegated admins will be able to better service their customers – mostly focused on the job queue abilities. It sounds really good – and it is. Of course. But … In my opinion, the entire CSP-user-types story should be revised .. big time. It is NOT ok to just be one happy family of admins within a customer’s tenant .. with all kinds of roles and multiple CSP partners.
It seems that there are quite some new application features .. That’s nice to see. Many people have been wanting more application updates. Why they’re in my “less favorite” section – well – I’m a developer ;-). So .. it’s not really my place to have opinions ;-).
What Microsoft means with this is simply the collaboration with other Microsoft 365 services. In this release, mostly Teams, Word and cloud printing.
Support cloud printing using Microsoft Universal Print – Microsoft will use its “Universal Print” service to deliver a straightforward printing experience. You’ll be able to send documents and reports to any of the printers defined in your Universal Print management page. While this is probably an awesome feature for many – in our case, we “solved” cloud printing on our own .. so I just hope this new feature doesn’t conflict with our solution ..
Country and regional
Yep, since this version, even more countries will be able to use Business Central:
Power Flower. This must be my personal most underrated topic .. something I really need to start diving into much more than I already did. I will .. at some point .. I promise .. :-/
Then you know there’s quite a lot of information .. just at your fingertips in the web client.
And that’s also the case for API information. Because really .. figuring out the available APIs in your system isn’t that easy at first sight. It is easy when you know where to look, though.
Well, if you want, you can get that info from a system-table. Namely table “API Web Service”, which is table 2000000193. So, if you would add “?table=2000000193” in the URL .. you’d get a list of all available APIs :-).
At least … if you’re working OnPrem. For some dark reason, I (admin) am not allowed to read that table in SaaS .. .
I wonder why .. I really do … . If anyone has a clue why – please put it in the comments.
Does that mean there is no solution in SaaS? Can’t I list all API endpoints simply from the web client? Well .. yes, you still can, but with a little detour. In fact, it was the API guru AJ who gave me an alternative table that also has quite a lot of metadata: namely table “Page Metadata” (2000000138). If you filter the data on PageType “API”, you get almost exactly the same as with the “API Web Service” table – although only pages, not queries – but at least it works in SaaS.
But then you might wonder .. Isn’t there a table “query metadata” that I could use as well? Sure, that would be table 2000000142 :-). But … that one is again only available OnPrem for another dark reason :(.
Last but not least, you might wonder if there is an API way to get to all APIs. Yep! And it was again the API guru himself who showed me this undocumented feature. The URL you’ll need for this is:
For a long time, we have been looking for a solution to be able to set up a docker environment, and then download the necessary apps from AppSource.
Why?
For different reasons. Like, to set up a development environment. I know we can do that in an online sandbox, but .. we’re already doing this on docker for OnPrem development, so if we could simply set up dev environments for any kind of development we do – in a uniform way – including for SaaS development – well, docker would be my choice. I mean: a developer shouldn’t care where the app ends up – he just needs to create his feature, and then the CI/CD pipelines should take care of the rest. Where it is deployed .. why should a developer care? And … since we’re talking about CI/CD pipelines– that is actually the main reason why we need to be able to set up docker containers with AppSource apps. Because that’s what we need to validate. Against multiple versions of those apps as well, by the way! And running pipelines against online sandboxes is simply not a valid, structured, scalable approach for the future. We need docker for that, and we need DevOps to be able to get to the AppSource apps “somehow”.
So .. how do we do it?
Well, you don’t. There is no way you can install AppSource apps into docker containers at this point. The only way we can run somewhat decent pipelines against those AppSource apps is to try to contact the companies behind those apps, and pray they are willing to give us runtime apps (in a decent timeframe), which we need for symbols. But that’s nowhere near convenient, let alone scalable.
So .. did you just waste my time?
No. We know Microsoft basically needs to make that available. Like .. they could “simply” provide an API where we can download the runtime apps (which, I believe, they can even build themselves as well – or at least they could make it a requirement). But .. they might need some “convincing” that we need it (like .. yesterday ;-)).
Not too long ago, I asked for opinions on whether I should just keep blogging, or whether I should also jump on the streaming wagon like many already did .. and start streaming content about Business Central. In fact, that poll is still on my website, and today, it shows this result:
So, a vast majority isn’t really waiting for me to bring out any streaming videos :-).
But .. in all honesty .. I was too curious not to try out this streaming thing as well – and in my opinion, I do think that some topics are more useful to stream about than to blog about.
Conclusion: I’m going to have to work a bit on the quality of just about everything. But other than that, please tell me – what do you think? Wasted time? Do you want to see more? If you have topics in mind – I’m all ears. Use the comments below this post, I’d say ;-).
I haven’t exactly blogged much lately, but that doesn’t mean I haven’t been active for or in the community. You might have seen my previous blogpost, for example, where I explored the wonders of streaming (which I’ll pick up again in the near future, by the way ;-)). I have also contributed a few sessions on a few occasions. One of these occasions was “Dynamics Con” – a free conference that was organized for the second time. You might remember my blogpost about it from when it was completely new: New Upcoming Conference: DynamicsCon. I’d like to highlight some points of my session in this post. You can find the YouTube recording at the end of this blogpost.
The Topic
During the Spring 2020 conference, I saw a session “Business Central Life Hacks” by Shawn Dorward (who has a blog “https://lifehacks365.com/” – how fitting is that ;-)), which focused on functional life hacks. This inspired me to do a developer version of it. So I asked him if I could “steal” his title and go for “Business Central Development Life Hacks“. I submitted the session title, and I was lucky enough to have it voted high enough to be selected as a session.
The Content
I wanted to focus on the not-so-obvious topics this time. So I tried to explore new extensions, new shortcuts and so on. I ended up with about 6 hours of content, which I had to cut back to about 40 minutes. This is what I covered (with links to the corresponding time in the video):
We probably all used quite some search-functionality within VSCode. I do believe though that there is more to the “Searching” than the obvious. Like saved search results, regex searching, and so on. Very useful ;-).
You must know by now I am a Multiroot-workspace-fan, a “multirootie” if you will ;-). I walk you a bit through the advantages you have when working multiroot.
I didn’t spend too much time on “settings”. I have loads of settings, but I just wanted to share a bit on how you can improve the UX for al development (in terms of visibility of the files in search and the explorer), and even synchronize settings on multiple PCs.
Of course ;-). If I talk VSCode, I usually talk snippets as well. In this case though, I talked about how you can disable snippets ;-). Yep, you ARE able to disable snippets from any extension, to not clutter your IntelliSense.
And yes, I was able to find two more VSCode Extensions that are (in my opinion) interesting for AL developers:
Error Lens, which gives you information about errors and warnings within your code.
Docs View, which shows you the documentation of the currently selected statement, method, .. in a separate window, which I found quite useful in quite some cases.
What is your opinion? Should I add them to the “AL Extension Pack“? I probably will ;-).
The “hack of all hacks” in my opinion: the MSDyn365BC.Code.History project from Stefan Maron. One of my favorite community contributions which I seem to be using all the time. I wanted to share some tips that Stefan didn’t show yet on his blog: to browse through the project with VSCode (of course ;-)).
During the questions, I had some time to share yet another piece of content that I had earlier cut from the session, and that is the usage of “VSCode Tasks”: a configurable ability to run PowerShell (and other tasks) from the Command Palette. I use it all the time .. :-).
Something Different
As you might have noticed if you saw the entire video – I wanted to do it a tad different. Instead of switching to PowerPoint all the time, I decided to use VSCode as my PowerPoint: an md (MarkDown) file and VSCode BreadCrumbs to navigate through the agenda of the session. Personally, I liked how this went – but .. did you? Any feedback is always appreciated.
The session
So now you have all the references that I used, the project, the video at the specific times, the links, .. So all that’s left for me to do is to share the actual video :-).
Recently, in our product, we enabled support for the new “Item References” in Business Central. Basically meaning: when upgrading to the new version of our product, we wanted to:
Make sure our code supported the new “Item Reference” table instead of the old “Item Cross Reference” table
Automatically enable the “Item Reference” feature (necessary, because when your code depends on this new feature, it has to be enabled ;-))
Automatically trigger the data upgrade (which basically transfers all data from one table to the other)
Obviously, this is all done using an upgrade codeunit. And this made me realize that there are some things that might not be too obvious to take into account when doing things like this. So .. a blogpost on upgrade codeunits.
The base mechanics
As such, an upgrade codeunit is quite simple. It’s a codeunit that is run when you’re upgrading an app from version x to version y (which is higher, obviously). In this codeunit, you will typically move data from one place (field or table) to another.
It’s a codeunit with the subtype “Upgrade”, which basically enables six available triggers, as the skeleton after this list shows:
OnCheckPreconditionsPerCompany/ OnCheckPreconditionsPerDatabase You can use the “precondition” trigger to test whether an upgrade is possible or not. If not: raise an error, and the upgrade process will be rolled back, and the previous app will still be available (if you’re running the right upgrade procedure ..).
OnUpgradePerCompany / OnUpgradePerDatabase In the “OnUpgrade” trigger, you’ll typically put the upgrade code
OnValidateUpgradePerCompany / OnValidateUpgradePerDatabase And in the end, you’d be able to validate your upgrade. Again: if not valid (let’s say, if the field still has content), you can again raise an error to roll back the upgrade process.
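Put together, the bare skeleton looks like this (a sketch; the object name and ID are made up):

```al
codeunit 50150 "MyApp Upgrade"
{
    Subtype = Upgrade;

    trigger OnCheckPreconditionsPerCompany()
    begin
        // Raise an error here if the upgrade cannot be performed; the whole upgrade is rolled back.
    end;

    trigger OnUpgradePerCompany()
    begin
        // The actual upgrade code: move data from one field/table to another.
    end;

    trigger OnValidateUpgradePerCompany()
    begin
        // Raise an error here if the result is not what you expected, to roll back the upgrade.
    end;
}
```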
Avoid running the upgrade multiple times
To track whether an upgrade has run or not, Microsoft created a system that they call “Upgrade Tags”.
Upgrade tags provide a more robust and resilient way of controlling the upgrade process. They provide a way to track which upgrade methods have been run, to prevent executing the same upgrade code twice. Tags can also be used to skip the upgrade methods for a specific company or to fix an upgrade that went wrong.
Microsoft Docs
So, you’ll have to take the upgrade tags into consideration, which means:
Create/manage a unique tag for every single upgrade method (usually including a companyname, a reason and a date)
Do NOT run the upgrade if the tag is already in the database
Add the tag in the database when you have run the upgrade
Add the tag in the database when you create a new company
The latter is often forgotten, by the way.. . But it’s obviously very important.
This, my friends, calls for a template pattern (if I may call it that – people have been quite sensitive about that word ) for an upgrade codeunit. But let’s leave that for later, when I talk about – yes AJ – SNIPPETS!
Update – AJ and Stefan Maron made some comments down below that are worth mentioning: When you install (not upgrade!) an app, you have to make sure that all upgrade tags are pre-registered. You can do that by simply calling UpgradeTag.SetAllUpgradeTags(); in an install codeunit (you’ll find a snippet for an install codeunit in “waldo’s CRS AL Language Extension” under “tinstallcodeunitwaldo”).
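To already make the tag handling a bit more concrete (the snippets further down wrap this up nicely), here’s a minimal sketch – object name, ID and tag are made up:

codeunit 50101 "ITE Upgrade Item Reference"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    var
        UpgradeTag: Codeunit "Upgrade Tag";
    begin
        if UpgradeTag.HasUpgradeTag(GetItemReferenceUpgradeTag()) then
            exit; // the upgrade already ran - never run it twice

        // ... the actual upgrade code goes here ...

        UpgradeTag.SetUpgradeTag(GetItemReferenceUpgradeTag());
    end;

    local procedure GetItemReferenceUpgradeTag(): Code[250]
    begin
        // prefix - reason - date
        exit('ITE-ItemReference-20210401');
    end;

    // Registers the tag in every new company, so the upgrade won't run there.
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"Upgrade Tag", 'OnGetPerCompanyUpgradeTags', '', false, false)]
    local procedure RegisterPerCompanyTags(var PerCompanyUpgradeTags: List of [Code[250]])
    begin
        PerCompanyUpgradeTags.Add(GetItemReferenceUpgradeTag());
    end;
}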
Don’t delete the data!
Another thing that might not be too obvious: a dependent app might still need the data you’re moving out of an obsoleted field or table. Let’s say that dependency is a field that you added in that (now obsolete) table, and that has to be moved to your new field.
Or it could be some business logic that depends on the value of a field which is not completely obsolete and is handled differently.
…
In any case – it might very well be that the depending app will need this data to do whatever: move its own fields out, move default data to new extension data, or .. whatever.
In other words: you (as Microsoft or as an ISV) simply can’t assume nobody needs the data anymore. You really cannot.
So: don’t delete the data. Move it to another field/table, make the original field/table obsolete, and leave it at that. Keep the data, and don’t kid yourself into deleting it from the obsolete table just because you think that’s cleaner. I can’t stress that enough! Just assume that in the upgrade process, there will be a dependent app that is installed after your app, so it will run its upgrade after yours as well. It will need the data. Keep it. Just keep it.
Don’t delete it like we did . Yep, we didn’t think about this, and we messed up. And we had to redo the upgrade. No harm done – we noticed quite fast – but still. I hope I prevented at least one (possibly major) problem with this blogpost ;-).
Preserve the SystemId!
Another not-so-obvious, but in my book VERY important thing, is the SystemId. When you’re transferring a complete table (which in my case was the “Item Cross Reference” to the “Item Reference” table), think about transferring the SystemId as well! I mean, the SystemId might be the primary key for anything Power-related, or for any third-party application that is interfacing with your system through APIs. If you would simply transfer the records like this…
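Something like this simplified sketch (not the actual code, and it assumes the field numbers of both tables match, so TransferFields can do its job):

local procedure TransferItemCrossReferences()
var
    ItemCrossReference: Record "Item Cross Reference";
    ItemReference: Record "Item Reference";
begin
    if ItemCrossReference.FindSet() then
        repeat
            ItemReference.Init();
            ItemReference.TransferFields(ItemCrossReference);
            ItemReference.Insert(); // a brand new SystemId is created here
        until ItemCrossReference.Next() = 0;
end;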
It would create a new SystemId for every record in the new table. Basically meaning all these records would be “new” for any interfacing system.
Not.Good! And guess what – Microsoft’s own upgrade code did exactly that, which I was lucky enough to catch in time. In the meantime, Microsoft has fixed it – but I do hope they’ll remember this for any other table that will be transferred to a new table in the future.
Just set the SystemId, and insert with the second “true”. Didn’t know this second boolean existed? Well, here are the official docs: Record.Insert Method (it turns out there are different pages about the Insert method – it took me a while to find the right one.. ).
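So the loop from above would become something like this (again a simplified sketch, not the actual upgrade code):

local procedure TransferItemCrossReferences()
var
    ItemCrossReference: Record "Item Cross Reference";
    ItemReference: Record "Item Reference";
begin
    if ItemCrossReference.FindSet() then
        repeat
            ItemReference.Init();
            ItemReference.TransferFields(ItemCrossReference);
            ItemReference.SystemId := ItemCrossReference.SystemId; // preserve the original SystemId
            ItemReference.Insert(false, true); // the second "true" inserts the record with the SystemId you just set
        until ItemCrossReference.Next() = 0;
end;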
Snippets
As promised, I have some snippets for you, and they are already in “waldo’s CRS AL Language Extension“. The three snippets concerning upgrades are:
tUpgradeCodeunitwaldo
tUpgradeTableProcedurewaldo
tUpgradeFieldProcedurewaldo
tUpgradeCodeunitwaldo This snippet has the base mechanics of what I was talking about above. You see the upgrade tag, and the event to subscribe to for managing upgrade tags in new companies. This snippet will also create a default way to compose a tag, including the current date. How cool is that ;-).
The snippet deviates a bit from what is described on Microsoft Docs. I just like this much better, as everything about this upgrade is nicely encapsulated in one codeunit, including the event.
tUpgradeTableProcedurewaldo This snippet is meant to be used in an upgrade codeunit, and will generate code to move all data from one table to the other. And you see that it handles the SystemId correctly as well. Again – very important! As the last line, you’ll find the procedure call, which is meant to be moved into the right “OnUpgrade…” trigger.
tUpgradeFieldProcedurewaldo Quite similar to the above, but this snippet is intended to move one field to another field. And again, the procedure call at the end is to be moved into the right trigger.
Feedback?
That was it! If there is anything I left out, I’d be happy to know! You can leave a comment, you can send me a message, .. . I’m always happy to hear!
The last couple of months, there have been quite some questions from ISVs (and especially the “old” ISVs) on how to register their apps in their specific situations. Just to name a few:
I only have OnPrem business – and I want to create a new product. How do I get a new object range?
Do I still need to go through the CfMD program to certify my apps?
App range? RSP range? What the hell is the difference?
With my new app, I don’t want to go to AppSource just yet – but first OnPrem – what do I do?
…
So .. in short: quite some questions on anything product registration and the obligations and/or restrictions that come with it.
Well, you might have seen the announcement from Microsoft ( http://aka.ms/bcpublisherprogram) – Microsoft comes with a new program which replaces the “Registered Solution Program (RSP)” and the “Certified for Microsoft Dynamics (CfMD)” program. And not only that .. there is more to consider. Much more…
Microsoft Dynamics 365 Business Central Publisher program
What the name indicates for me is that there is clearly an “AppSource first” strategy. Why? Well, the word “publisher” is typically used for anyone that puts apps in the cloud on a marketplace. Well, sometimes also the word “author” is used – but anyway .. let me make my point here .
Cloud first strategy
Now .. It shouldn’t be necessary to explain how many advantages there are in the cloud version of BC. I mean – first of all: all baseapp features will work in the “cloud first” (get it? ) and OnPrem “if you’re lucky”. And .. on top of that, you get all those extra services that come with the cloud. The Power-“fluff” (I should be calling it differently, I know – but that area is still too grey for me ;-)), the Azure stuff, … and so.much.more. The BaseApp will have a cloud-first mentality: and so should you.. .
So, for Microsoft, it basically comes down to convincing/forcing/… ISVs into a cloud-first mindset. It’s as simple as that. In short:
Do you have a product? Put it on AppSource.
You don’t need it on AppSource but only OnPrem? Still, put it on AppSource, in order for you to be able to implement it OnPrem.
You don’t have this cloud first mindset? Then it might happen that you’re going to have to pay fees.
…
Wait … fees? What??
Are we being “punished” for selling OnPrem only?
Well .. If you only do OnPrem business, and you don’t register your apps on AppSource: yes, you’re going to have to pay a fee. Is that being “punished”? I don’t look at it that way. I look at it as Microsoft trying to motivate you to do the “right” thing, to follow their strategy. To sail in the winds they are blowing … .
Do know that Microsoft absolutely does not want to make you pay for doing your business. Not at all. The best day for them would be the day they don’t have to invoice anyone for this at all .. because that would be the day that all partners have finally embraced the cloud-first strategy.
Because, let’s be honest, not all partners have really been listening and acting on Microsoft’s “cloud-first” approach, have they? It’s not like it hasn’t been obvious though:
Microsoft moved “infrastructure software” to the cloud: Azure was born
Microsoft moved end-user software to the cloud: Office 365 with even Sharepoint, Exchange, …
…
Did you really think ERP wasn’t going to follow the same approach? Hasn’t that been obvious for so long already? I’m just saying .. .
In other words, anyone who has already invested so much for moving to AL (apps), for moving to a “cloud-ready” architecture, for moving to AppSource – they are good! No extra investment needed. And they are – to use Microsoft words – “rewarded by freeing them from program fees or additional test efforts, outside of what is required for publishing to AppSource“.
Who would have to pay?
To not misphrase anything, let me quote Microsoft on what you can expect:
This program will introduce fees that will gradually increase from September 2022 onward for publishers whose resellers have sold on-premises solutions to new customers without an equivalent cloud-based solution. The fees will only be applied to sales in countries where the Dynamics 365 Business Central online service is available. Solutions registered to existing customer licenses before the program cut-off date will not be impacted by program fees. However, adding new non-AppSource solutions after the cut-off date will be impacted.
Kurt Juvyns – Microsoft
For me, that means: Any ISV, who implements (by partners or by itself) products (apps) that do not exist on AppSource, will have to pay fees to do so.
In Practice
Let us look at this practically.
You will still be able to do OnPrem business. Check!
The only thing you need to do is to make sure you register your app on AppSource as well. That’s it.
So .. you might want to make sure to follow the cloud rules, no? Hybrid? Hell no! CodeCustomisedBaseApp? I guess not.
You will be able to reuse the same codebase for SaaS and OnPrem. Check!
You don’t even have to. Just make sure your product is registered, and if necessary, you can branch off and include some OnPrem necessities in your code for use at your OnPrem customers. This is extra work – I would try to avoid that.. .
As long as your apps are registered on AppSource, and can be used in the cloud!
You don’t need to certify for CfMD anymore. Check!
That has been replaced by AppSource validation.
You will be able to use all registered number ranges on all environments. Check!
If you already have a registered number range, you don’t have to renumber to comply with this new program. Check!
What’s next?
If it wasn’t clear yet, let me make it clear in my own words:
If you haven’t started to modernize your solution: start now
If you’re still in C/AL (v14 or not): start now and make sure you build your products into apps, and that they work on the latest version.
If you’re still hybrid, it means you’re still on v14: start unwinding the reasons why your solution is hybrid now, and turn it into real apps on the latest version.
If you’re having a code-customized base app: start your way to real apps now.
If you did your homework, and moved your product to cloud-ready solutions already – but you’re not on AppSource just yet? Well, register for AppSource. Now. It might take longer than you think (affixes, tooltips, … ) ;-).
Are you still stuck with all your DLLs, which make your product OnPrem-only? Make your way to Azure Functions or another technology, to make sure you can register your apps on AppSource.
…
That’s all I could come up with :-).
If you need help – there are people that can help you. If you think I can help you, just contact me – if I have time, I will – but there are also dev centers that are specialized in helping/coaching partners to make this move to the cloud. You can find more info here: Find a Development Center (microsoft.com)
Questions?
I’m sure you’ll have a ton of questions, depending on what situation you’re in. I don’t know if I answered any – but some of the questions I had a few weeks back have certainly been answered for my own situation. But still – IF there are questions, you can reach out to Microsoft!
Thoughts
Looking back at the focus, the sessions, the blogposts, … that I did the past years, it seems that the recommendations I have been making were not that far from the truth. Like:
Cloud-ready software architecture
No Hybrid (I was quite passionate about that…)
No code-customized base app (same passion .. This was never an option for me .. Never ever)
Don’tNet (which I shared in my latest 2 NAVTechDays appearances – even Vjeko agreed with me ;-))
CodeCops (which should all pass! Or at least most of them)
DevOps (imho the only way to manage your product lifecycle – and, since you’ll be on AppSource and you’ll have to manage your app for upcoming versions – well – DevOps is going to be crucial ;-))
No migration but rebuild (I have been advocating this, not on the blog, but in any conversation I have had with colleague-partners asking me “what do I do?”. I’m very happy to have gone the “total rebuild” way, not having to pay any attention to refactoring, just a complete rewrite of the product. It gives you all the freedom in the world.)
..
Because, you know, if you comply with all this, the step to AppSource is pretty easy.
For a few people, this will be a very “current problem”; for others, who have suffered through years of “unstable” NAS and Job Queue for that matter .. this blogpost might be interesting.
The primary reason for this post is an issue that – after quite some debugging – appeared to be introduced in version 17.4 (and currently also v18.0). But I decided to tackle it in a somewhat more generic way: dealing with “unstable Job Queues” instead of just this particular issue.
Disappearing Job Queue Entries
Although this post isn’t just about the bug that haunts us in the current release of Business Central (17.6 / 18.0 at the time of writing), I do want to start with it. It has to do with Job Queue Entries that were disappearing for no apparent reason. I traced it down to a code change in the default app.
In our case, it happened quite frequently (multiple times a day) that recurring Job Queue Entries were disappearing .. which obviously prevents the execution of the task at hand.
There are some prerequisites to this behavior, though. And you can read them actually quite easily in code:
They must be part of an existing Job Queue Category
There must be multiple Job Queue Entries within that category
It must be a stale job, which means:
It must be “In Process”
While it doesn’t have an “active” Task ID anymore (which basically means the task no longer exists in the Task Scheduler tables).
That’s a lot of “ifs”. But .. in our case .. it happened. Frequently. We use categories to execute jobs in sequence and prevent locking or deadlocks.
Luckily for us, the issue is already solved in vNext (minor), where Microsoft reworked that piece of code.
But ..
This also tells us something else. I mean .. Microsoft is writing code to act on certain situations which means that these situations are expected to happen:
Task Scheduler might “lose” the Task Id that it created for a certain Job Queue Entry
When a Job Queue Entry is “In Process”, it might happen that it just stops executing
Or in other words: the Job Queue can act unstable out-of-the-box, and Microsoft is trying to mitigate it in code.
There are a few good reasons why the Job Queue can act somewhat unstable
Before you might go into hate-and-rant-mode – let me try to get you to understand why it’s somewhat to be expected that this background stuff might act the way it does. Just imagine:
Reconfiguring / restarting a server instance – maybe while executing a job
Installing an app with different code for the job queue to execute – maybe while executing a job
Upgrade-process
Kicking a user – maybe while executing a job
…
For some of these, we have already noticed instability just after the action. And probably there are many more causes.
So, yeah .. It might happen that queues are stopped .. or maybe frozen. Let’s accept it – and be happy that Microsoft is at least trying to mitigate it. The code in vNext is for sure a (big) step forward.
But .. maybe there are things that we can do ourselves as well.
Prevent the delete
As said, the above issue might delete Job Queue Entries. If you don’t want to change too much of your settings to prevent this (like removing all categories until Microsoft fixes the issue), you might also simply prevent the delete. How can you do that? Well – just subscribe to the OnBeforeDeleteEvent of the “Job Queue Entry” table ;-). Here is an example:
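A minimal sketch – the exact condition is up to you; here I simply block any delete of a recurring entry that doesn’t come from a user session:

codeunit 50120 "ITE Prevent JQE Delete"
{
    var
        DeleteNotAllowedErr: Label 'Deleting a recurring Job Queue Entry from a background session is not allowed.';

    [EventSubscriber(ObjectType::Table, Database::"Job Queue Entry", 'OnBeforeDeleteEvent', '', false, false)]
    local procedure PreventDeleteOnBeforeDeleteJobQueueEntry(var Rec: Record "Job Queue Entry"; RunTrigger: Boolean)
    begin
        if Rec.IsTemporary() then
            exit;
        if not Rec."Recurring Job" then
            exit;
        // A user deleting the entry from the client is fine;
        // a background session (like the stale-job cleanup) is blocked.
        if not GuiAllowed() then
            Error(DeleteNotAllowedErr);
    end;
}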
But .. as said – this only solves this one current issue – but we can do more… and maybe even look somewhat into the future and make the Job Queue act more stable for our customers …
Restart just in case
One thing we learned is that restarting a job queue never hurts. Although “never” isn’t really the right word to use here, because restarting may come with its caveats.. . But we’ll come to that.
We noticed that we wanted to be able to restart job queues from outside the web client – in fact, from PowerShell (Duh… ). Thing is, restarting a Job Queue Entry in the client is actually done by setting and resetting the status, simply by calling the “Restart” method on the “Job Queue Entry” table.
If we decouple this a little bit .. I decided to divide the problem into two parts:
We need to be able to run “any stuff” in BC from PowerShell
We need to be able to extend this “any stuff” any time we need
Restart Job Queues – triggered from an API
I decided to implement some kind of generic event, which would be raised by a simple API call .. which would kind of “discover and run” the business logic that was subscribed to it.
So, I simply created two publishers
And I trigger it from this “dirty workaround” API (I actually don’t need data, just the ability to call a function through the API).
As a second step, I simply subscribe my Job Queue Restart functionality.
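The screenshots aren’t part of this text, so here’s a rough sketch of the idea – all object names are made up, and I’ve simplified it to a single publisher: a codeunit with a generic integration event, an API page with a service-enabled action that raises it, and a subscriber that does the actual restart.

codeunit 50130 "ITE Generic Event Mgt."
{
    procedure RaiseOnRunGenericLogic(Parameter: Text)
    begin
        OnRunGenericLogic(Parameter);
    end;

    [IntegrationEvent(false, false)]
    local procedure OnRunGenericLogic(Parameter: Text)
    begin
    end;
}

page 50130 "ITE Maintenance API"
{
    PageType = API;
    APIPublisher = 'ite';
    APIGroup = 'maintenance';
    APIVersion = 'v1.0';
    EntityName = 'maintenanceTask';
    EntitySetName = 'maintenanceTasks';
    SourceTable = "Company Information";
    ODataKeyFields = SystemId;
    DelayedInsert = true;
    Editable = false;

    layout
    {
        area(Content)
        {
            field(id; Rec.SystemId)
            {
                Caption = 'id';
            }
        }
    }

    [ServiceEnabled]
    procedure runGenericLogic(var ActionContext: WebServiceActionContext)
    var
        GenericEventMgt: Codeunit "ITE Generic Event Mgt.";
    begin
        // Called as a bound action (POST .../maintenanceTasks(<id>)/Microsoft.NAV.runGenericLogic),
        // for example from PowerShell with Invoke-RestMethod.
        GenericEventMgt.RaiseOnRunGenericLogic('RestartJobQueues');
    end;
}

codeunit 50131 "ITE Restart Job Queues"
{
    [EventSubscriber(ObjectType::Codeunit, Codeunit::"ITE Generic Event Mgt.", 'OnRunGenericLogic', '', false, false)]
    local procedure RestartOnRunGenericLogic(Parameter: Text)
    var
        JobQueueEntry: Record "Job Queue Entry";
    begin
        if Parameter <> 'RestartJobQueues' then
            exit;
        if JobQueueEntry.FindSet() then
            repeat
                if JobQueueEntry.Status in [JobQueueEntry.Status::"In Process", JobQueueEntry.Status::Error] then
                    JobQueueEntry.Restart(); // resets the status and reschedules the background task
            until JobQueueEntry.Next() = 0;
    end;
}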
So now, we can restart job queue entries in an automated way.
When do we restart?
We decided to restart at every single app deploy (so, basically from our build pipeline) and at every restart of services (if any). Because:
It doesn’t hurt
It also covers the upgrade-part
It would also call any extra “OnSetDatabase” code that might have been set up in the new version of the app
Language/Region might be changed
One caveat with an automated restart like that is that the user that does the restart – in our case, the (service) user that runs the DevOps agent – will be configured as the user that executes the jobs. A consequence of this is that the language and/or region might change as well. And THAT was a serious issue in our case, because all of a sudden, dates were printed in US format (mm/dd/yy) instead of BE format (dd/mm/yy). You can imagine that for due dates (and others, like shipment dates), this is not really a good thing to happen.
Long story short, this is because there is a field on the “Scheduled Task” table that determines which format the system will use in that background task. That field is “User Format ID” (don’t ask how long it took me to find that out…).
So, we decided to override this value whenever a JobQueueEntry is being enqueued … .
The code we used had a hardcoded value – we didn’t literally do it like this in production – it’s merely to illustrate that you can override the language/region on the background task ;-).
Combine jobs
A big advantage of this kind of generic functionality is that all your jobs that clean data from your own log tables can be hooked into it – and you basically only have one job in the Job Queue Entries that manages all these cleanups. Only one job for many cleanups.
My recommendation would be to apply this kind of “combining” jobs whenever possible and whenever it makes sense. 5 jobs have less chance to fail than 50.
We were able to reduce the number of Job Queue Entries by simply calling multiple codeunits from one new codeunit – and using that new codeunit for the Job Queue ;-). Sure, it might fail in codeunit 3 and never reach codeunit 4 – so please only combine when it makes sense, and when it doesn’t hurt the stability of your own business logic.
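A minimal sketch of what I mean (the cleanup codeunits are made-up examples):

codeunit 50140 "ITE Run All Cleanups"
{
    // Use this codeunit as the single "Object ID to Run" of one recurring Job Queue Entry.
    trigger OnRun()
    begin
        Codeunit.Run(Codeunit::"ITE Cleanup Log Entries");
        Codeunit.Run(Codeunit::"ITE Cleanup Integration Data");
        // If one of them fails, the next ones won't run in this simple setup -
        // so only combine jobs when that's acceptable for your business logic.
    end;
}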
Manage your retries
Just a small, obvious (but often forgotten) tip to end with.
Depending on the actual tasks that are executing – it might happen that things go wrong. That’s the simple truth. If you are for example calling an API in your task, it might happen that for a small amount of time, there is no authentication service, because the domain controller is updating .. or there is no connection, because some router is being restarted. And in SaaS, similar small problems can occur to any 3rd party service as well …
It is important to think about these two fields:
Maximum No. of Attempts to Run – make sure there are enough attempts. We always take 5.
Rerun Delay (sec.) – make sure there is enough time between two attempts.
It doesn’t make sense to try 3 times in 3 seconds, if you know what I mean. For example, a server restart, or a router restart will take more than 3 seconds.
I would advise making sure there is a retry window of about 15 minutes: 5 retries, 180 seconds delay. This has a massive impact on stability, I promise you ;-).
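And if you create your Job Queue Entries from code, you can set those fields right there – a small sketch (the codeunit name is the made-up one from above):

local procedure ScheduleCleanupJob()
var
    JobQueueEntry: Record "Job Queue Entry";
begin
    JobQueueEntry.Init();
    JobQueueEntry.Validate("Object Type to Run", JobQueueEntry."Object Type to Run"::Codeunit);
    JobQueueEntry.Validate("Object ID to Run", Codeunit::"ITE Run All Cleanups");
    JobQueueEntry.Validate("Maximum No. of Attempts to Run", 5);
    JobQueueEntry.Validate("Rerun Delay (sec.)", 180); // 5 attempts x 180 seconds = about a 15-minute retry window
    Codeunit.Run(Codeunit::"Job Queue - Enqueue", JobQueueEntry);
end;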
Conclusion
Let’s not expect the Job Queue to run stable all the time, every time, out-of-the-box. There are things we can do. Some are small (settings), some involve somewhat more work. I hope this helped you at least a little bit to think about things you could do.
You might have read my post “Visualize app.json dependencies in VSCode (using GraphViz)” where I explained “another” way to generate a dependency graph. Another than what? Well – other than the DGML that was just announced on Microsoft’s Launch Event of Business Central 2020 Wave 2 as being “on the drawing board”.
I don’t think a lot of people are that familiar with running the alc.exe .. or know its parameters off the top of their heads. So I decided to dive into TypeScript a bit again to make that part a bit easier for you.
The function will:
1) Find your working directory
2) Find the alc.exe
3) Find out the symbol path from your settings
And then run the command needed to generate the DGML of your current project.
This is obviously an alpha version.. and I’m pretty sure it will never become a beta, as I expect Microsoft to come up with their own command or settings (if they haven’t already).
For reasons that are not too important, I am trying to find a way to “describe my custom APIs”. You know: you’re at a project, you had to implement an integration with a 3rd party application, and you developed some custom APIs for it. When done, you have to communicate this to the 3rd party for them to implement the integration. So you need to pass along the documentation: some guidance on “how to use Business Central APIs”, and a description of the requests, responses and possibilities you have with “typical” Business Central APIs.
Disclaimer
Now, before I continue, let’s make one thing absolutely clear. I’m by far not an API god, guru, master or whatever you call an expert these days ;-). I’m just an enthusiast who tries to find his way in the matter .. . The reason for this blogpost is merely that recently I got the question “how do I communicate this API that I just made with the customer that has to use it?“.
Let’s see if we can find a decent answer.
When you get questions like this, it makes sense to try to find out what they would like to get as an answer. What would I expect as a decent, useful description of an API that I’d have to use?
OpenAPI / Swagger
And we can’t deny that when we need to integrate with something, we LOVE to get some kind of OpenAPI / Swagger description. As such: a document (or set of documents) that defines/describes an API following the OpenAPI Specification. It is basically a JSON or YAML file that can be represented in some kind of UI to make it very readable, testable, .. . An industry-accepted standard, so to speak..
There is a lot to say about this, and I don’t want to bore you with the specifics. I’ll simply give you a decent resource for you to read up: OpenAPI Initiative (openapis.org) / OpenAPI Specification (Swagger) . And a screenshot of what it could look like, if you would have some kind of graphical UI that makes this JSON or YAML much more readable:
As you can see – a very descriptive page, with a list of API endpoints, a description on how to use them, and even a way to try them out, including details on what the responses and requests would have to be. Even more, this screenshot also shows the abilities: like, I can only read the accounts, but I can read, update and delete trialbalances (if that makes any sense .. ).
But .. this does not exist out-of-the-box for Business Central. And this definitely doesn’t exist for custom APIs out-of-the-box, as .. you know, they don’t exist “out-of-the-box” ;-).
So .. what CAN we do out-of-the-box?
Well, the most basic description we can give to our customer/3rd party is what we call an “edmx” (Entity Data Model XML). Which basically is an XML description of your APIs. You can simply get to the edmx by adding “$metadata” to the URL. Or even better: “$metadata?$schemaversion=2.0” to also get the enum-descriptions.
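For example, for a custom API the metadata URL could look something like this (the placeholders between <> obviously depend on your own environment and API pages):

SaaS:   https://api.businesscentral.dynamics.com/v2.0/<tenant>/<environment>/api/<publisher>/<group>/v1.0/$metadata?$schemaversion=2.0
OnPrem: https://<server>:<port>/<serverinstance>/api/<publisher>/<group>/v1.0/$metadata?$schemaversion=2.0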
Well, I can tell you – that’s not an OpenAPI – and in general, the developers at the customer are going to be a bit disappointed in that kind of documentation. And they are not that wrong.
So .. Let’s see if we can convert that edmx in something more useful: an OpenAPI description of our (custom) APIs! I reached out to the twitterz.. and funny enough, it was my own colleague Márton Sági who pointed me in an interesting direction.
Márton pointed me to this OpenAPI.NET.OData library (and Sergio acknowledged it was a good direction), which, after some investigation, happened to have some kind of converter as well. So .. I tested it out, and it seems something useful comes out of it. Let me briefly explain what I did (after quite some investigation – this really isn’t my cup of tea ).
I cloned the repo and opened the solution in Visual Studio
I built the OoasUtil project
I used the exe to convert the edmx that I saved to a file
The question is – what do I do with that yaml-file, right? Well, let me get back to where I started: this OpenAPI Specification is some kind of standard way to describe an API, and the language used is either JSON or YAML. This converter can create both. And this is just the next step to get to that graphical UI I was talking about.
Now comes the easy part .. . You might already have googled “Business Central OpenAPI” .. and it will have you end up on a small page that explains how to open the standard APIs as a yaml. In fact, there are two pages:
V1: OpenAPI Specification – Business Central | Microsoft Docs
V2: OpenAPI Specification – Business Central | Microsoft Docs
The pages are essentially the same, but you’ll see that Microsoft only prepared this v1 yaml for you (and even that one doesn’t seem to be downloadable anymore ). The download isn’t available for V2. But guess what – we just created our own yaml! And now you know how to do that for any kind of API as well.. :
Default Microsoft APIs v1
Default Microsoft APIs v2
Microsoft Automation APIs
Runtime APIs
Custom APIs
…
What’s more interesting on these Docs pages is that they explain how to display it, and even edit it. Locally, you can use VSCode extensions to edit the API documentation (essentially: the yaml file) and display it with SwaggerUI. There are previewers, there is a way to create your own node app, … . You could also use an online service, like SwaggerHub, to basically do the same: display the yaml in a UI-friendly way, and edit it as well. I tried both – both are very easy to do. I’m not going to talk about how to do that, as it’s explained well enough in the Docs above. All you need is the yaml, and you’ll be able to use that to visualize it.
I tried to explain in the readme how to use it. What I did is simple. I basically included …
.. the OoasUtil assembly so you don’t have to clone & compile it anymore from Microsoft’s github. It’s “just there” and the conversion tool is available as an exe.
.. scripts to make it somewhat easier to get the edmx from Business Central APIs. We need this in a file, not as a result in some browser. This script (1_.edmx.ps1) will output it to a file.
.. the script (2_convertEdmxToYaml.ps1) to convert this edmx to an OpenAPI Specification in yaml
.. the javascript to create a SwaggerUI (node app) based on the yaml
.. the scripts (3_StartSwagger_.ps1) to run the node app in node.js (you’ll have to install node.js of course).
As an end result, you should have a SwaggerUI-representation with all the goodies you’d expect, all applied to your custom API (like you see in the screenshot above). If you set up the servers in the yaml, you’ll even be able to interactively try the APIs as well (tested and approved ).
Plans
Our plans don’t end here, though. Just imagine:
All your customers are on your own docker image
Within this docker image, you also provide some WebApp (like this node app) that can display this SwaggerUI
When you deploy your app, an up-to-date yaml is being generated, and your SwaggerUI is being updated with it
This way, you’d kind of have up-to-date OpenAPI documentation of all your custom APIs available for your customers at all times. It’s work in progress in our company – and definitely worth the investment in my opinion.. . Of course, this is for an OnPrem scenario, but kind of the same could be done for BC SaaS. It would just be a matter of deploying the yaml to some kind of SwaggerUI WebApp (on Azure?).
Are there alternatives?
As I said – I am by no means any kind of expert in this matter. In the same twitter-conversation, Damien Girard recommends using the “OData Client Library“. I didn’t investigate that yet. And Stefano mentioned that it’s possible to “embed APIs on Azure API Management” to have Swagger out-of-the-box. I didn’t investigate that either. @Stefano: maybe it’s interesting as a blogpost at some point? .
For Belgium, though, the default APIs have been useless in many situations. Simply said: the Belgian localization has some fields that kind of replace default fields (like “Enterprise No.” replacing “VAT Registration No.”). When I contacted Microsoft about it, the statement was simple:
The Business Central Default APIs are not localized
Microsoft
And honestly, I can understand that. So I didn’t expect anything to change, and started to abandon the default APIs completely and implement custom APIs in all situations. In fact – since the default Business Central APIs are “never” localized, and chances are that you ARE in a localized database – that means there is always a chance there is something wrong/missing in the default APIs. So “just” using them and assuming everything will work out-of-the-box is maybe not the best assumption to make.. .
Recently though, my eye caught a change in 18.1.
So – it seems that for the Belgian localization, Microsoft made a workaround to provide the “Enterprise No.” field instead of the “VAT Registration No.” – which would mean that the default Business Central APIs just got a whole lot more interesting for Belgium.
And maybe not only for the Belgian localization. I don’t know. I notice quite a lot of changes to the V2 APIs since the 18.0 release.
It makes sense to browse through the code changes between different versions of Business Central – that’s basically how I figured out the above. If you’d like to know how: well, here’s a video where I show exactly that, using Stefan Maron‘s tool ;-).
Does this mean – let’s all use the default APIs?
Well, for me not just yet, but at least for Belgium it’s worth considering again – at least in some cases. I know about some use cases where it’s absolutely necessary to be able to connect to BC without any kind of custom API. All I can say is: make sure the data is like it’s supposed to be, and test it thoroughly in the situation of that specific customer .. basically meaning: you should be making a conscious choice instead of just assuming all is well ;-). That’s all.