Channel: Waldo's Blog Microsoft Dynamics NAV & Business Central

waldo.restapp

For a few weeks now, people have been asking “can I have the restapp you were showing” – well, here it is: https://github.com/waldo1001/waldo.restapp

But that wouldn’t be much of a blogpost, would it ;-).

Just an example…

During NAVTechDays, I did 2 sessions:

In both sessions, I talked about the concept of “dependencies”. Yes indeed – in my opinion, “dependencies” is an opportunity that we should embrace .. (just watch the “Development Methodologies” session if you want to know how and why). Now, during the sessions, the RESTApp was actually just an example on how we internally “embrace” the concept.

What does it do?

Well .. not much, really. At least if “making your life a lot easier” is “not much”, that is ;-).

It just “encapsulates” the complexity, the functionality, the responsibility, the “whatevery” that has to do with “REST Calls”.

I mean, did you ever try to use the httpclient-wrappers to do any kind of “decent” webservice call? Did you ever fumble with setting the “content-type”, or any kind of headers? And honestly – did you spend more than 5 minutes to make it work? Well, if all answers to these questions are “yes” .. then you will appreciate this app ;-).

Or, as a picture tells you more than a thousand words, it turns this code:

Into:

I hope you agree that this second way of handling the same call is a LOT easier (a rough sketch of what the “raw” variant looks like follows the list below):

  • Not having to declare all these different http-types, just the RESTHelper codeunit from the RESTApp.
  • Not having to care about the order of adding content type or headers or anything like that
  • Not caring about error handling.
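
The screenshots aren’t reproduced here, but to give you an idea of the “before” side: a rough sketch of what such a “raw” call with the built-in Http types typically looks like (the endpoint and payload are made up – the actual RESTApp equivalent is in the repo):

procedure CallServiceTheVerboseWay()
var
    Client: HttpClient;
    Request: HttpRequestMessage;
    Response: HttpResponseMessage;
    Content: HttpContent;
    ContentHeaders: HttpHeaders;
    ResponseText: Text;
begin
    Content.WriteFrom('{"name": "waldo"}');

    // The infamous part: the default content type is text/plain,
    // so you have to remove it first and only then add the right one.
    Content.GetHeaders(ContentHeaders);
    ContentHeaders.Remove('Content-Type');
    ContentHeaders.Add('Content-Type', 'application/json');

    Request.Method := 'POST';
    Request.SetRequestUri('https://example.com/api/endpoint'); // made-up endpoint
    Request.Content := Content;

    if not Client.Send(Request, Response) then
        Error('The call itself failed.');
    if not Response.IsSuccessStatusCode then
        Error('Web service returned %1: %2', Response.HttpStatusCode, Response.ReasonPhrase);

    Response.Content().ReadAs(ResponseText);
end;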

The current functionality includes the following things:

  • A helper codeunit for REST calls
  • A helper codeunit for JSON stuff
  • Logging of the request and the response message

I could have done that with a simple codeunit, dude

I hear you. “Why an App? Why implement the complexity of dependencies for such a simple thing?”

Well, it’s not just there to make your coding easier. It’s there to make the whole lifecycle of how you do webservice calls in all your projects easier. Just think about it. You are probably going to do REST calls in many different products and/or implementations. And many of them are different, need more functionality, …

Or – you just might run into a bug in how you do it, and have to update a bunch of other implementations .. .

Or – at some point, you might figure that having decent logging for all outgoing REST calls would be interesting (and let me tell you: it IS interesting (and yes, it’s already included in this app))! If you have implemented a RESTApp like this, a simple update gives you this new functionality on ALL your projects. Simply release it to all your customers (long live DevOps!). You can update all you want .. as many times as you want.

Or – at some point, you need to be able to set up “safe” sandboxes, and need to overrule all webservice-calls in a sandbox to not risk “live” calls from a sandbox (guess what – this IS something to think about!)? Just update this app, deploy, done! On ALL your customers.

I can give you lots of scenarios, to be honest.. . But tell me again – how is that codeunit going to help you in any of this?

Just an example

I know, I already had this subtitle :-). But now, it’s more like a disclaimer.

Don’t see this code as being a leading app. It’s meant as an example .. and nothing more. It is not the version we are using internally (which I’m not allowed to share, as I don’t own the code). It doesn’t have “authentication helpers” or anything like that. And it probably doesn’t have all the functions that are necessary to do all kinds of REST calls.  Obviously, this is where you can come in :-). Maybe it’s not a “leading app” now (if that’s an expression at all) .. you can help me make it one ;-). Please feel free to do any kind of pull request. Anything that might help the community. Change, restructure, .. whatever!

Maybe, at some point, it’s mature enough to pull-request into Microsoft’s System Application ;-). In its current state, in my opinion, it isn’t.

Ideas

I do have some ideas that I want to include in this. Like making it easier to work with different authentication types. Or including a way to set a test URL, like RequestBin, that replaces all calls so you can track the actual requests that are being generated.

If you have ideas, or remarks, or anything, you can always leave a comment – or use the “Issues” section on GitHub to add ideas (or issues).

Enjoy!


The “SystemId” in Microsoft Dynamics 365 Business Central 2019 release Wave 2

I’m returning after a very interesting workshop on DevOps in Berlin .. . At this moment, I’m wasting my time in the very “attractive” TXL airport because of my delayed flight. And how can I better waste my time than by figuring out some stuff regarding Business Central?

Figuring out indeed, because I barely have internet, a crappy seat, nearly no access to food, … so for me this is a matter of burying myself so I don’t have to pay attention to my surroundings ;-). Anyway .. it’s not all that bad .. but a delayed flight is never nice. Anyway…..

Topic of today: the new SystemId!

While I was converting our app to be able to publish on the Wave 2 release .. this was something that I noticed:

All the “integration id’s” are marked for removal, and – most interesting – will be replaced by “the SystemId”. What is that? Google, Microsoft Docs .. none of my conventional resources helped me find out what SystemId is .. luckily, I did come across some information on Yammer from Microsoft ;-).

RecordIDs

You probably all know RecordIDs, right? A single “value” that refers to a specific record in a specific table. We all used them in generic scenarios, right? Also Microsoft – I don’t know if you know “Record Links”? A system table that stores notes and links to specific records? Well, the link to the record is made through the RecordID. We have been using it for years .. . Now, a big downside of using RecordIds was the fact that when you renamed a record (changed one of the fields of the key), its RecordId changed as well .. and all of a sudden, you could lose the connection in all tables where you stored that specific ID. Long story short – not ideal for integration or generic scenarios…

Surrogate Keys

And this is where the “surrogate keys” of my good friend Soren Klemmensen came into play. He came up with a design pattern (well, I don’t know if he came up with it – but he sure advocated it for a long time) that describes how to implement a dedicated, single-field unique key for a record. Basically: add a field to the table, and make sure it gets a unique GUID. Make sure all these surrogate keys have the same field number, and you are able to generically access the value of that key for any record.
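
A minimal sketch of that pattern (the table, object ID and field names are illustrative – not the actual objects from the pattern or from Microsoft’s implementation):

table 50100 "My Setup"
{
    fields
    {
        field(1; "Primary Key"; Code[20]) { DataClassification = CustomerContent; }
        // The surrogate key: a GUID field, ideally on the same field number
        // in every table, so it can be read generically via RecordRef.
        field(8000; "Surrogate Id"; Guid) { DataClassification = SystemMetadata; }
    }
    keys
    {
        key(PK; "Primary Key") { Clustered = true; }
        key(SurrogateKey; "Surrogate Id") { }
    }

    trigger OnInsert()
    begin
        if IsNullGuid("Surrogate Id") then
            "Surrogate Id" := CreateGuid();
    end;
}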

This is something Microsoft actually implemented themselves. And the code is all over the place. Even still in Wave 2, we have the code to fill the “Integration Ids”, as they call it. Nice system, but a lot of plumbing is needed to make it work. I don’t know if there was a design pattern that described what you needed to do to apply this to your own tables – I never did ;-). But definitely interesting to do for many scenarios. Thing is .. quite a lot of work.

The SystemID

Now, as you saw in the first screenshot: Microsoft is abandoning this field 8000 (that so-called “integration id”) – their first implementation of the surrogate keys – and will implement “SystemId” from the platform. Meaning: whatever you do, you will ALWAYS have a key called “SystemId” for your table, which is a unique GUID in that table that can identify your record, and will never be changed – even when you rename your record.

How cool is that! Here is an example of a totally useless table I created to show you that I have the systemId in intellisense:

What can we expect from the systemId?

Well, in my understanding – and quite literally what I got from Microsoft (thanks, Nikola):

  • It exists on every record
  • But not on virtual/system tables (not yet, at least)
  • You can even set it in rare scenarios where you want to have the same value (e.g. copy from one table to another, upgrade…). Simply assign the SystemId to the record and do Insert(true,true) – 2x true (see the sketch after this list)
  • There is a new keyword – GetBySystemId – to fetch a record by its SystemId
  • It is unique per table, not per DB. Customers and items may have the same IDs, though that is hard to achieve if you are not manipulating it yourself, since GUIDs are unique. Let’s say they are “probably” unique – but on SQL, the unique key is defined on the field per table, so uniqueness is only guaranteed per table.
  • Integration Record is still there, however the Id of the Integration Record matches the SystemId of the main record (Microsoft has code and upgrade in place)
  • You can only have simple APIs on it (no nesting, like lines). At this point, at least. It should be fixed soon, which is why the APIs are not refactored yet to use SystemId instead of Id.
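
A small sketch of those two scenarios, assuming you copy a Customer record into a table of your own (the backup table is hypothetical, just for illustration):

procedure SystemIdExamples()
var
    Customer: Record Customer;
    FoundCustomer: Record Customer;
    BackupCustomer: Record "My Customer Backup"; // hypothetical table in your own app
begin
    Customer.FindFirst();

    // New keyword: fetch a record by its SystemId instead of by its primary key
    FoundCustomer.GetBySystemId(Customer.SystemId);

    // Rare scenario: keep the same SystemId when copying to another table
    // (or during an upgrade). Assign it yourself and call Insert(true, true) -
    // the second 'true' makes the platform use the SystemId you assigned.
    BackupCustomer.TransferFields(Customer);
    BackupCustomer.SystemId := Customer.SystemId;
    BackupCustomer.Insert(true, true);
end;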

A few more remarks

If you create a field that refers to a SystemId, it makes sense to use the DataClassification “SystemMetadata” for it. Not because I say so .. but because I noticed Microsoft does ;-).

Another not unimportant thing I noticed: this is a system-generated field. So if you need the field number, you have “RecRef.SystemIdNo”:
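
For example (a minimal sketch – the Customer table is just an arbitrary choice):

procedure ShowSystemIdFieldNo()
var
    Customer: Record Customer;
    RecRef: RecordRef;
begin
    RecRef.GetTable(Customer);
    // SystemIdNo returns the field number of the system-generated SystemId field,
    // so you can read it generically through a FieldRef.
    Message('SystemId field number: %1, value: %2',
        RecRef.SystemIdNo, RecRef.Field(RecRef.SystemIdNo).Value);
end;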

My take on it

From what I understood: there is work to do, but things are looking good:-). In fact, it is exactly what we have been asking for – and Microsoft delivers. Again! Great! I know this will see a lot of use in the (near) future! Within the Base Application, and in lots of apps.

Do know, I didn’t have any documentation about this – so all of this is based on a small remark on Yammer, and things I could see in code… So – if you have anything to add – please don’t hold back ;-). That’s why I have a comment section ;-).

New logo for Microsoft Dynamics 365 Business Central

Small message that I just needed to share :-).  As said – I’m prepping my sessions for Directions, which basically means: I’m spending all my free time in “Business Central” these days.  No printing, hardly any social contacts … . 

And while publishing to SaaS, I noticed this when I refreshed my page:

That’s right!  A new logo, people!  (I should say “Icon” actually, but you know what I mean).  Let’s have a closer look:

Doesn’t look bad! Not bad at all .. But it does mean I’ll have to redesign my 3D printed logo ;-).  And I will … I so will .. .  As I said on twitter earlier today: I’m so distracted now that I have to first make sure that I started a new print with a new concept before I can continue prepping my sessions :-).

Microsoft Dynamics 365 Business Central 2019 release Wave 2 is released!

Sorry for the shortness of this blog (maybe you like it that way ;-)) – but just a small reminder for anybody that has been sleeping under a rock for the last couple of days:

Microsoft Dynamics 365 Business Central 2019 release Wave 2 is released!

All you need to know is simply quite well documented by Microsoft.  Let me give you a few links:

And yes, that’s right: C/AL is gone, and the RTC is gone as well!  But together with that, a lot of goodies are being thrown in our lap!  If you want to know more, read the above links, or come to Directions or NAVTechDays and learn from the very people that built it!

It’s a big one – so build pipelines will break, code needs to be upgraded.  I guess it’s time for action ;-). 

More to come ..

Insufficient stack to continue executing the program safely

The better part of my past week can best be summarized by this oh-so-descriptive error message:

Right: a message I have spent a long time on, trying to find out what was happening – and what caused it.  Multiple days – so let me try to spare you the pain should you encounter this error.

(tip: if you don’t care about the story, just skip to the conclusion ;-)).

History

We are rebuilding our product for Business Central – and are almost finished.  In fact, we have spent about 500 days building it – and since the recent release of Wave 2, we are fully in the process of upgrading it – because obviously, since it is all extensions (we have a collection of 12 dependent extensions), that should be easy.  (Think again – Wave 2 came with a lot of breaking changes… but that’s for another blogpost ;-)).

Symptoms

Our DevOps builds had been acting strange for a while – just not “very” strange .. .  In fact: when a build failed with a strange error (yep, the above one), we would just retry, and if it was OK, we wouldn’t care.

That was a mistake.

Since our move to Wave 2 .. the majority of the builds of just 1 of the 12 apps failed – and even (which had never happened before) the publish from VSCode failed as well, with the same error message:

Insufficient stack to continue executing the program safely. This can happen from having too many functions on the call stack or function on the stack using too much stack space.

We are developing with a team of about 9 developers – so people started NOT being able to build an environment, or compile and publish, anymore.  Sometimes.

Yes indeed: sometimes.  I had situations where I thought I had a fix, and after 10 builds or publishes – it started to fail again.

And in case you might wonder – the event log didn’t show anything either.  Not a single thing.  Except for the error above.

What didn’t help

I started to look at the latest commits we did.  But those were mainly due to the upgrade – stuff we HAD to do because of the breaking changes Microsoft introduced in Wave 2.

Since it failed at the “publish” step, one might think we had an install codeunit that freaked out.  Well, we have quite a few install-codeunits (whenever it makes sense for a certain module in that app) .. I disabled all of them – I even disabled the upgrade-codeunits.  To no avail.

Next, I started to look at the more complex modules in our app, and started to remove them .. Since one of the bigger modules had a huge job during install of the app – AND it publishes and raises events quite heavily, I was quite sure it was that module that caused the pain.  To test it, I removed that folder from VSCode, made the code compile .. and .. things started to work again.  But only shortly.  Moments later, it was clear in DevOps that certain builds started to fail because of the exact same error.  From victory .. back to the drawing board ;-).

Another thing we tried was playing with the memory on our build agents and docker hosts.  Again, to no avail .. that absolutely didn’t help one single byte.

And I tried so much more .. really.  I was so desperate that I started to take away code from our app (which we have been building for over 6 months with about 9 developers (not fulltime, don’t worry ;-)).  It’s a whole lot of code – and I don’t know if you ever tried to take away code and make the remaining code work again .. it takes time :-/.  A lot!

What did help

It took so much time, I was desperately seeking help .. and from pure frustration, I turned to Twitter.  I know .. not the best way to get help .. but afterwards, I was quite glad I did ;-).

You can find the entire thread here:

First of all: thanks so much to all of the people for their suggestions.  There were things I hadn’t tried yet.  There were some references to articles I hadn’t found yet.  All these things gave me new inspiration (and hope) .. which was invaluable!  Translation files, recursive functions, event log, dependencies, remove all code, force sync, version numbers, …

Until phenno mentioned this:

Exactly the same error message, with a big xmlport.  It first pointed me in the wrong direction (recursive functions / xmlport) ..

But then one of our developers reminded me that, months back, we had also added a big object: a 1.2 MB codeunit, auto-generating all Business Central icons as data in a table, to be able to use them as icons in business logic.  Initially I didn’t think it would ever have an effect on the stability of the app (in this case – the inability to publish it) .. we wrote the damn thing more than 4 months back, for crying out loud :-/ and the code was very simple – nothing recursive, no loops, very straightforward.  Just a hellofalot of code ;-).  But .. it doesn’t hurt to try what happens when you remove the code .. so I tried .. and it works now!  Victory!

Conclusion

The size of a file (or object) does matter.  If you have the error above – it makes sense to list your biggest files, and see if you can make them smaller by splitting the objects into multiple smaller ones (if possible).

In our case, it was one huge object in one file, and I don’t know what exactly was the problem: the size of the file, or the size of the object.  There is a difference.  If I had wanted to keep the functionality, I might have had to split the object into multiple codeunits, and on top of that split those into multiple files (which – in my honest opinion – is best practice anyway..).

Also, I have the feeling that Wave 2 is a bit more sensitive to these kinds of situations.. I don’t know.  It’s just – we had this file for quite a while already, and it’s only with the upgrade to Wave 2 that it started to be a problem.

In any case – I hope I won’t wake up tomorrow, concluding the error is back and all the above was just one pile of crap.  Wish me luck ;-).

My NAVTechDays

I’ve got quite a week ahead of me .. .  Not only will I host one session and some workshops .. I will actually host 2 sessions, 2 workshops and an ISV session this year.  What did I get myself into?

No repeats!

If you look at my session schedule, and you have visited Directions EMEA, well, you might wonder if I’m “just” redelivering content at NAVTechDays.  Well .. No!  Totally not, actually. Without giving away anything – let me try to explain …

Development Methodologies for the future (Thursday 11:30 – 13:00)

First of all, if you attended my session at Directions, you noticed that there, I had actually prepared 3 sessions, and the audience chose the topic of that particular session.  I was lucky that all three topics I prepared were about equally popular while the audience was voting – so I would be stupid to do just a repeat.  No, I will actually tackle a completely different topic than I did at Directions EMEA.  All new content – and more ;-).  More details Thursday at 11:30 during my session ;-).

{Connect App}² (Friday 11:00 – 12:30)

My session with Vjeko at Directions was “Connected Apps” .. .  This one is “Squared” ;-).  Which means: more!  Much more! So, if you attended that one at Directions, and you thought we took it “far” – well – think again!  Just to say, also this one is not a repeat.  How could it be?  At Directions, we only had 45 minutes ;-).

Workshops

Also this year, I will be hosting workshops during the pre-days, which I always look forward to.  I just hope that the internet will be good, because I will be quite dependent on it ;-).  I prepare individual Azure VMs for every attendee to make it as comfortable as possible .. but that means: internet! ;-).  What I will be doing is something I have been doing quite a lot …

Developing Extensions for Microsoft Dynamics 365 Business Central – Introduction (Tuesday)

For the people that are taking their first steps into AL development.

Developing Extensions for Microsoft Dynamics 365 Business Central – Advanced (Wednesday)

For the people that have already taken their first steps .. but still feel they need some guidance for the “stepping” to feel comfortable (if that’s an explanation at all ;-)).

ALOps (Thursday 15:40 – 15:55)

And if that’s not enough .. I’m doing yet another session .. . This one is an ISV session for the product we have been working so hard on to get to the community: ALOps.  We are a Platinum sponsor, which comes with an ISV slot – and I’m looking forward to speaking to all people that are interested in “doing decent DevOps for AL easily” ;-).  We will obviously also have a booth at the EXPO – please come by!  We have stickers ;-). And we might just get your pipeline up and running .. during the conference ;-).

In total, that means I have about 19 hours and 15 minutes of content to deliver at NAVTechDays … .  Again .. what did I get myself into :-/.

NAVTechDays 2019 – Final thoughts

It’s over – the week I always look forward to for so long passes by in the blink of an eye: NAVTechDays.  As Vjeko already shared the goodies – I will do so as well – together with my final thoughts and some pictures ;-).

I feel old and repetitive by saying this conference is “something else”.  Just imagine: quality food: morning, noon and evening, quality (recorded) sessions – all available to the community, 90-minute deep-dive topics, in quality seats, with quality media equipment, and quality speakers (uhum ;-)) – all captured by a quality photographer.  Quality! That’s NAVTechDays: no dime is spared to provide THE BEST conference experience anyone could wish for.  From start to finish!  Unmatched on any level. This year, there was even a hairdresser, I kid you not ;-).

If you don’t believe me: here is the Photo Album of this year’s edition!  And if you think this is the only quality edition?  Well – think again – here are all albums of all editions!

All sessions are recorded, and already available on YouTube and on Mibuso.  If you have about 28.5 hours to spare – you can find all videos here.

My edition

As predicted, my NAVTechDays was a bit too busy.  So much content in only a week .. I have to admit – it’s simply too much.  I probably won’t do that again – and if I do – I’ll at least have this blogpost to hold onto to declare myself crazy .. again ;-).

One special thing I was really happy to be able to do: I got my parents into my session. That’s a special feeling, I can tell you that. You always try to explain what you do, and what impact it has on you – but they can only really understand it when they have actually experienced it :-).

I can’t judge if my sessions were well received.  Thing is – I realize that the topics I talk about – the opinions I evangelize – are not always the opinions that are shared by all of you.  Like “Code Customized AL” or “embracing dependencies”, to just name a few. I know people that are passionately in favor of code customizing AL – and who are passionately against any form of dependencies.. .  Well, I realize that this can have its effect on how a session is received (like: complete bullshit ;-)).  All I can do is share my experience, and what I believe makes sense going forward .. and I still stand 100% by what I have been advocating ;-). And yes – in the “real world”.

In any case … as said, you can find my sessions on mibuso and on youtube here:

NAV TechDays 2019 – Development Methodologies for the future

And the session with Vjeko about connect apps:

NAV TechDays 2019 – {Connect app}²

In the next weeks/months, these videos will also be turned into a series of blogposts.  I have already planned a few – and Vjeko is already blogging his ass off as well .. .  Expect a lot, soon (or late – no promise ;-)).

All there’s left for me to say is: thank you!  Thank you for joining my session, thank you for joining my workshops, thank you Luc, for making this happen for all of us – it’s a real honor to be a small part of it!  Thank you, Vjeko, my bro, for sharing the stage with me :-).  Awesome week!

Picture time!


Microsoft Dynamics 365 Business Central (OnPrem) Production environment on Docker?

Well .. no!  “Not yet”, at least ;-).

Let me be clear: this post is NOT a recommendation that you should use Docker for your OnPrem customers’ production environments.  Not at all.  This is merely a blogpost about the fact that I wouldn’t mind Microsoft officially supporting Docker as an alternative to an NST deployment.

If you don’t care about the “why” below – just upvote here ;-): https://experience.dynamics.com/ideas/idea/?ideaid=daf36183-287e-e911-80e7-0003ff689ebe

“Continuous Upgrade”

Just imagine you would be able to continuously upgrade your customers.  This actually has quite an impact on your daily life .. on anything that involves the life of a partner: from support to customer deployment to hotfixing, to release management, … .

Let me give you a few examples – and I’ll do that with some extreme numbers: either we have 300 customers, all on different versions – or we have 300 customers all on the same version:

Product releases

In a way, you need to be able to support all product releases on all versions of Business Central (or NAV) that you have running at your customers – it doesn’t make any sense to support a version that isn’t running at any customer, does it ;-)?  If a customer is running v13 of your product, you need to be able to hotfix it, and roll out the fixes to one or more customers with that same version. 

Even more – not only do you have to keep track of all the versions/releases/customers, you also need to manage the hotfixes and bump them to all versions/releases where they are necessary (a hotfix in v13 might be necessary in 14, 15, .. as well).

On the other hand – if everyone were on the same (and thus latest) release: everyone can be treated the same, and hotfixing is easy, rollout is easy, administration is easy.  Simply because there is only one product release to maintain (you start to get why Microsoft is pushing us to the cloud, right? ;-)).

In order to facilitate this in Git/DevOps, one way is to create (and maintain) release branches for all supported releases.  On top of this, for all these branches you have to maintain a dedicated branch policy, build pipeline, artifact feed and whatnot .. .  Good luck doing that for 300 different versions.. .

Support

I think we can all agree that our support department would be very much relieved if they only had to support 1 version/release, right?  All bugfixes/improvements/features/tooling/… are just there.

Bottom line

The easier we are able to upgrade a customer to the next runtime of Business Central.. the more customers WILL be on the latest Business Central and version of our product .. the easier it is to manage our product .. The easier it is to support our product .. The easier our life is.  It’s a simple fact.  No science needed here …

Upgrading an OnPrem customer

You might know my opinion on “code customizing AL” – if not, you can find it in this post.  In a way – for me – “code customizing AL is evil” ;-).  So .. from that perspective, I’m going to assume we are all on extensions/apps .. and all we have to do is manage apps at customers.

In terms of upgrading – we would upgrade apps to new versions of those apps, which is quite easy to do.  You can prepare all upgrades in upgrade codeunits, so in a way, when prepped correctly, upgrading is just a matter of installing the new app the right way (by triggering the upgrade routine).  I will not go into how to do this.
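
For reference, a minimal sketch of what such an upgrade codeunit can look like (the object ID, version check and procedure name are purely illustrative):

codeunit 50110 "MyApp Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    var
        CurrentModuleInfo: ModuleInfo;
    begin
        NavApp.GetCurrentModuleInfo(CurrentModuleInfo);
        // Only run the data upgrade when coming from a version older than 2.0
        if CurrentModuleInfo.DataVersion.Major < 2 then
            MoveLegacySetupData();
    end;

    local procedure MoveLegacySetupData()
    begin
        // ... move/transform data from the old structure to the new one
    end;
}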

But that’s not all …

We also have to upgrade the platform, the runtime.  Not the client anymore (thank god ;-)), but still all the NSTs and other binaries we have installed.  At this point, it’s still quite manual: “insert DVD and start clicking”.  I know it’s scriptable .. heck, I even created a function once to “easily” upgrade an existing installation by calling the “repair” option from the DVD (you can find the script here), but honestly, in a world with Docker …

The Docker Dream

Just imagine – all you do to install an OnPrem Business Central customer is install a real SQL Server for the database, and use the Docker images provided by Microsoft for the NST.  Why only the NST?  Well, that’s the part that needs to be upgradable practically every single month.

But when on Docker, you know how easy it is to set up a new environment, right?  How easy would it be to upgrade, to set up UAT environments in other versions, to “play” with localizations, .. .  Well, as easy as we already know from using Docker – but applying this to a production environment would really eliminate the complexity of upgrading continuously.

Honestly, I think this is the missing link to be able to implement full “continuous upgradability” for OnPrem customers. 

We already do this …

Call me nuts – but for our internal database, which is completely our own concern, we already have this running as a proof-of-concept.  And it has been running for so many months without one single problem :-).  I shouldn’t say this, but it has been making upgrading and maintaining this particular environment (with +20 apps) so much easier that we are really wondering “why not” for customers.  We won’t, obviously, but still … we dream ;-).

Vote!

If you agree with me, then you also agree with Tobias Fenster, who has created an idea on the ideas site which you can upvote – please do!  If you don’t understand a single thing about Docker or what impact it could have for us – then just take my word for it and still upvote it here: https://experience.dynamics.com/ideas/idea/?ideaid=daf36183-287e-e911-80e7-0003ff689ebe

Microsoft Dynamics 365 Business Central Virtual Event, June 3rd, 2020

It was quite expected, I guess.  After all the cancellations of Business Central conferences, like NAVTechDays, Directions, Days of Knowledge, .. , Microsoft announced today that they will host a first “Virtual Conference” called “Microsoft Dynamics 365 Business Central Virtual Event”, and it will be held on June 3rd, 2020.

The content will be 16 pre-recorded sessions that will be available (on-demand) for 12 months:

  • What’s new: Dynamics 365 Business Central modern clients – part 1
  • What’s new: Dynamics 365 Business Central modern clients – part 2
  • What’s new: Visual Studio code and AL language
  • Managing access in Dynamics 365 Business Central online
  • Managing customer environments in Dynamics 365 Business Central online
  • What’s new: Dynamics 365 Business Central application
  • Overview: Dynamics 365 Business Central and Common Data Service integration
  • Interfaces and extensibility: Writing extensible and change-resilient code
  • Dynamics 365 Business Central: How to avoid breaking changes
  • What’s new: Dynamics 365 Business Central Server and Database
  • Dynamics 365 Business Central: Coding for performance
  • Deep dive: Partner telemetry in Azure Application Insights
  • Dynamics 365 Business Central: How to migrate your data from on-premises to online
  • Migrating data from Dynamics GP to Dynamics 365 Business Central online
  • Dynamics 365 Business Central: Your latest demo tools and resources
  • Introducing SmartList Designer for Business Central (this session will be published later – expected in July)

I have no idea what the user experience will be for a conference like this – but let’s find out and register here: https://aka.ms/virtual/businesscentral/2020RW1

And mark your agenda: June 3rd, 2020!

Getting not-out-of-the-box information with the out-of-the-box web client

A few days ago, I saw this tweet:

And that reminded me of a question I got a few weeks ago from my consultants on how to get more object information from the Web Client.  In more detail: in Belgium, we have 2 languages for a tiny country (NLB, FRB), which differ from the language used by developers (ENU).  Meaning: consultants speak a different language than the developers, resulting in misunderstandings.

I actually had a very simple solution for them:

The Fields Table

For developers, a well-known table with information about fields.  But hey, since we can “run tables” in the web client (and this is pretty safe to do, since they are not editable (and shouldn’t be – but that’s another discussion :D)), it was pretty easy to show the consultants an easy way to run tables.  It’s very well described by Microsoft on Microsoft Docs.  Just add “table=<tableid>” to the URL the right way, and you’re good to go.  So for running the “Fields table”, you could use this URL: https://businesscentral.dynamics.com/?table=2000000041

And look at that wealth of information:

  • Data types
  • Field names
  • Field captions depending on the language you’re working in
  • Obsolete information
  • Data Classification information
  • ..

All a consultant could dream of to decently describe change requests and point developers to the right data, tables and fields.

This made me wonder though:

And can we easily get even more out of the web client?

Not all Business Central users, customers, consultants, … are developers.  So, can we still access this kind of information without access to code, VSCode or anything like that?

Yes we can. 

In fact, the starting point should be: how do I find objects?  Is there a list of objects?  And therefore also a list of these so-called system tables?

Well, you’ll need to …

learn how to find “AllObj”, and you’ll find it all!

AllObj is a system table that houses all objects (including the objects from Extensions), so if you go to this “kind of” url, you’ll find all objects in your system:

https://businesscentral.dynamics.com/?table=2000000038

You’ll see a very simple list of objects, and you can even see the app (package Id) it belongs to (not that that is important though …):

So – now you know how to find all objects and how to run objects.  You can run tables, reports, queries and pages, simply by constructing the right URL (pretty much the same as explained here).

System/Virtual tables

To find these special tables with system information, simply filter the “AllObj” table on “TableData” and scroll down to the system tables number range (IDs of 2,000,000,000 and above) and start browsing :-).  You’ll see that you don’t always have permission to read the content .. but if you do, you’d be surprised by the data that you can get out of the system.

Just a few pointers

  • Session information: https://businesscentral.dynamics.com/?table=2000000009
  • All Objects: https://businesscentral.dynamics.com/?table=2000000038
  • Fields: https://businesscentral.dynamics.com/?table=2000000041
  • License Permission: https://businesscentral.dynamics.com/?table=2000000043
  • Key: https://businesscentral.dynamics.com/?table=2000000063
  • Record Link: https://businesscentral.dynamics.com/?table=2000000068
  • API Webhook Subscription: https://businesscentral.dynamics.com/?table=2000000095
  • API Webhook Notification: https://businesscentral.dynamics.com/?table=2000000096
  • Active Session: https://businesscentral.dynamics.com/?table=2000000110
  • Session Event: https://businesscentral.dynamics.com/?table=2000000111
  • Table Metadata: https://businesscentral.dynamics.com/?table=2000000136
  • Codeunit Metadata: https://businesscentral.dynamics.com/?table=2000000137
  • Page Metadata: https://businesscentral.dynamics.com/?table=2000000138
  • Event Subscription: https://businesscentral.dynamics.com/?page=9510

What if I get an error?

Well, that happens – like this one:

I don’t know why it does that – but do know that you can always turn to a developer, who can try to apply the C/AL trick: just create a page in an extension, add all fields from the table, and simply run that page (a sketch follows below).
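
A minimal sketch of such a page, here using the “Record Link” system table as an example (the object ID is arbitrary, and only a handful of its fields are shown – add whichever fields you need from the table that gave you the error):

page 50120 "Record Link Browser"
{
    PageType = List;
    SourceTable = "Record Link";
    UsageCategory = Lists;       // so the page shows up in Tell Me
    ApplicationArea = All;
    Editable = false;

    layout
    {
        area(Content)
        {
            repeater(Records)
            {
                field("Link ID"; "Link ID") { ApplicationArea = All; }
                field("Record ID"; Format("Record ID")) { ApplicationArea = All; }
                field(Description; Description) { ApplicationArea = All; }
                field(Created; Created) { ApplicationArea = All; }
                field("User ID"; "User ID") { ApplicationArea = All; }
            }
        }
    }
}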

Deploying from DevOps the right way: enabling External Deployment in OnPrem Business Central environments

It seems that lately, I’m only able to blog about something when I have a tool to share.. . That needs to change .. :-/. But not today. Today, I’m going to share yet another tool that we at the ALOps-team have been working on to serve many of our customers. And we decided to share it with the community. The tool is free, it will stay free … but, there is a …

Disclaimer

The tool is a hack, nothing more than that. We simulate the behavior of what we think happens in the SaaS environment when you upload an extension through the Automation API. So, the tool is “as is”; there is no official support, other than me not wanting you to suffer problems with it ;-). There is a GitHub repository where you can share your feedback.
The tool is dependent on how Business Central will evolve in this matter – and we hope this will “just work” for many updates to come. It will work on a decent install of Business Central. Not on any kind of copy/paste or other non-standard installation.

Deploying extensions from DevOps, without a build agent at the customer site

The official statement from Microsoft for deploying apps from DevOps to your OnPrem customers is: install a DevOps build agent. As you might know, build agents sometimes don’t act the way you want – and having to maintain a bunch of infrastructure that is not 100% under your control isn’t something that you want either. Customers might install a Windows update, or .. do whatever makes your release pipeline stop running…

But what if…

.. we could just enable the Automation API (because, as you know, there is an ability to publish extensions with it) for OnPrem customers, and use that in our DevOps for our CD pipelines?
Well .. using the Automation API to publish an extension, is quite the same as using the “Upload Extension” action on the “Extension Management” page in Business Central:

Thing is – that doesn’t work OnPrem. So in a way – the “Upload Extension” functionality in the Automation API doesn’t work OnPrem either. The action simply isn’t available. And if you run page 2507 (which is the upload wizard page) manually, it simply shows you the following message when you try to upload an extension:

So – the question is .. how do we enable “External Deployment”?
Well, it’s just a setting on the Server Instance: you point it to some kind of API endpoint that the NST will call whenever anyone uploads an extension.

ALOps.ExternalDeployer

So, we created a PowerShell module that makes it pretty easy to enable the External Deployer on any OnPrem environment. In fact, with 4 lines of PowerShell, you’ll have it up and running! Run this PowerShell on the environment that runs the NST you would like to deploy to.

1. Install ALOps.ExternalDeployer: this will install the PowerShell module on the machine

install-module ALOps.ExternalDeployer -Force

2. Load the module: this will simply load the necessary commandlets in memory:

import-module ALOps.ExternalDeployer 

3. Install the External Deployer: this will install an agent that will take care of the app-publish and install whenever you upload an app through the Automation API, or the upload page.

Install-ALOpsExternalDeployer 

4. Link the ExternalDeployer to the right NST: it will update and restart the NST with the settings needed for the External Deployer.

New-ALOpsExternalDeployer -ServerInstance BC

Done!

The easiest way to test it is to simply upload an extension through the Upload Extension wizard in Business Central. Thing is, in Business Central, the page isn’t accessible, but you can easily run any page by using the parameter “?page=2507” in the Webclient URL.
So – just run page 2507 to upload an Extension. Now, you’ll get this message:

That’s looking much better, isn’t it?
Next, since the “Deployment Status” page isn’t available from “Tell Me” either, you can also run that page by providing this parameter in the URL: “?page=2508“.
Even if the upload failed, you get information on the page, just like you would in Business Central SaaS:

AND you can even drill down:

So .. It works! And this also means it will work through the Automation API. You can find all info on how to do that here: https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/itpro-introduction-to-automation-apis

And if you would like to do that with ALOps …

Well, pretty easy. There is an ALOps step “ALOps Extension API“, which has all necessary parameters to deploy. Just provide the right parameters, like:

  • API Interaction: Batch Publish (if you’d like to publish more than one extension at the same time)
  • API Endpoint
  • API Authentication

And you’re good to go! Here’s an example of one of our pipelines:

In our company, it’s all we use today. All deployments to all customers are using this external deployer. So rest assured – it’s well tested and very much approved ;-).
Enjoy!

Deploying from DevOps the right way (Part 2): Deploying to OnPrem Business Central environments with the automation API

You might have read my previous blogpost on how to enable the “external deployment” in an OnPrem Business Central environment. Well, that post deserved an “extension” as I didn’t provide examples on how to deploy with PowerShell – which you would be able to do within Azure DevOps.

Scenario

The scenario is still the same: you have all these OnPrem customers that you would like to deploy your apps to. Microsoft is clear: just install a DevOps agent at all those customers. The alternative that I try to give is – well, don’t install the DevOps agent, but just make “External Deployment” possible by following the steps in my previous post, and use the Automation API, just like you would for Business Central SaaS. Sidenote: this API needs to be accessible from your company. So we make sure that the customer allows our IP in their firewall, to be able to access this Automation API directly from our location.

PowerShell

Since I don’t use PowerShell in DevOps (suplise, suplise), I created an example-script for you in my repo here: https://github.com/waldo1001/Cloud.Ready.Software.PowerShell/blob/master/PSScripts/DevOps/DeployWithAutomationAPI.ps1
Just a few things worth mentioning:

  • It’s good to have the app.json and the app-file in the artifacts, to be able to easily get the details about the app being released
  • The publish is just a matter of streaming the file in the request
  • Notice I’m using the “beta” version of the API. I was able to publish the extension with v1.0, but I wasn’t able to get the deployment status – only through the beta-version. Since this is an unsupported way of deployment, I don’t think I can ask Microsoft to help me on this ;-).
  • You would be able to loop the call about the deployment progress, to see if it was successful or not – basically a loop until the status says “completed” or “failed”.

The main part here obviously is the PATCH method to upload the extension. The external deployer you installed will do the rest.. .

ALOps

As said, I don’t use PowerShell anymore, because I’m using ALOps, just because it is so much more hassle-free .. and we see that many people are starting to use ALOps as well, also for community purposes. Nice! This means community projects are also getting decent build pipelines instead of none – and it’s free, so why not ;-).
In ALOps, we created the “ALOps Extension API” step, which you can use to publish an extension through the Automation API OnPrem. The easiest way to do that is by simply introducing one step, and setting the “Batch Publish” interaction. Basically, it will get all app-files you selected as artifacts, figure out the publishing order for you, and install all artifacts that you have set up in your release step. Easy peasy. It doesn’t care if it’s in Docker or not .. if the endpoint is available and the external deployer is installed, your publish will work. Here is the setup in the classic editor which releases 22 apps – one simple step, with only 1 real parameter to fill:

Or in yaml:

 steps:
 - task: hodor.hodor-alops.ALOpsExtensionAPI.ALOpsExtensionAPI@1
   displayName: 'Batch Publish'
   inputs:
     interaction: batch
     api_endpoint: 'https://xyz.infra.ifacto.be/bc/api'
     authentication: windows 

Microsoft Dynamics 365: 2020 release wave 2 plan

That’s right. It’s time again for the next round of features that Microsoft is planning for the next major release. It’s weird this time, lacking most info from conferences .. the kind of “silent” release of Wave 1 .. it’s almost like flying blind. Although, there is a crapload of information online. And of course, don’t forget Microsoft’s Virtual Conference from June 3rd. 

Since I’m still focusing on Business Central – I’m only going to cover that part .. but do know that the entire “Dynamics 365” stack has a release for Wave 2.  Business Central-related information can be found here: https://docs.microsoft.com/en-us/dynamics365-release-plan/2020wave2/smb/dynamics365-business-central/planned-features

As it doesn’t make sense to just name all features (as they are all listed on the link above), I’m just going to talk again about the features I’m looking forward to (and why) – and the ones that I’m kind of less looking forward to.

What am I looking forward to?

As always – most probably this is going to be somewhat tech-focused .. sorry .. I am what I am, I guess.

Service-to-service authentication for Automation APIs

Very much looking forward to that – just because of the possibilities that we’ll have with DevOps, because at this point, supporting a decent release flow in DevOps to an environment that is fully “Multi Factor Authentication” – well – that’s a challenge. For me, this has a very high priority.

Support for an unlimited number of production and sandbox environments

Today, a business can only be in three countries, because we can only create 3 production environments. That obviously doesn’t make sense – so it’s absolutely a good thing that Microsoft is opening this up! Next to that…

Business Central Company Hub extension

That sounds just perfect! It seems they are really taking into account that switching companies is not a “per tenant” kind of thing, but really should be seen across multiple tenants.

It seems it’s going to be built into the application, within a role center or a task page.  At some point, Arend-Jan came up with the idea to put it in the title bar above Business Central, like this:


A really neat idea that I support 100% :-). As long as it would work across multiple tenants/localizations .. :-). Maybe as an extension on the Company Hub? Who knows.. . Whatever the solution, I’m looking forward to it!

I couldn’t find the extension in the insider-builds – so nothing to show yet.. .

Business Central in Microsoft Teams

Now, doesn’t THAT sound cool? Because of the COVID-19 happenings, our company – like many other companies out there – has been using Teams a lot more than it was used to. And the more I set up Teams, the more I see that little integrations with Business Central could be really useful!

What exactly they are envisioning here, I don’t know, but the ability to enter timesheets, look up contact information to start a chat or call or invite or… . Yeah – there are a lot of integration-scenarios that would be really interesting.. .

Common Data Service virtual entities

I’m not that much into the Power-stuff (fluff?) just yet, but I can imagine that if I were able to expose my own customizations, or any not-out-of-the-box entities, to CDS, it would be possible to implement a lot more with Power Apps and other services that connect to the CDS entities.

Performance Regression and Application Benchmark tools

One of the things we are pursuing is the ability for DevOps to “notice” that things are getting slower. This means that we should be able to “benchmark” our solution somehow. So I’m looking forward to diving into these tools to see if they can help us achieve that goal!

Pages with FactBoxes are more responsive
Role Centers open faster

These are a few changes in terms of client performance – and what’s not to like about that ;-). I have been clicking through the client, and it definitely isn’t slower ;-). I also read somewhere that caching of the page design is done much smarter .. even across sessions, but I didn’t seem to find anything that relates to that statement here in the list.

On-demand joining of companion tables

So so important.  Do you remember James Crowter’s post on Table Extensions?  Well, one of the problems is that it’s always joining these companion tables.  I truly believe this can have a major impact on performance if done well.   

Restoring environments to a point in time in the past

I have been advocating strongly against “debug in live” – well, this is one step closer to debugging with live data, but not in the production environment. Also this is a major step forward for anyone supporting Business Central SaaS!

Attach to user session when debugging in sandbox

Sandboxes are sometimes used as User Acceptance Test environments. In that case, multiple users are testing not-yet-released software, and finally, we will be able to debug their sessions to see what they are hitting.

Debug extension installation and upgrade code

Finally! I have been doing a major redesign of our product, and would have really enjoyed this ability ;-). Nevertheless, I’m very glad it’s finally coming! No idea how it will work, but probably very easy ;-).

What am I not looking forward to?

Well, this section is not really the things I don’t like, but rather the things I wasn’t really looking forward to as a partner/customer/.. . I don’t know if it makes any sense to make that into a separate section .. but then again .. why not. It actually all started with something that I really really hated in one of the previous releases: the ability to go hybrid / customize the Base App. And I kept the section ever since ;-). So .. this is the rest of the list of features we can expect:

Administration

Application

Migrations to Business Central Online

Modern Clients

Seamless Service

General

I have the feeling not everything is included in this list, honestly. There isn’t much mentioned at the VSCode level, while we know there is going to be quite some work in the “WITH” area .. . And we expect to have “pragmas” in code available in the next release as well – or so I understood. That’s just a couple of things you could see in the “Interfaces and extensibility: Writing extensible and change-resilient code” session of Microsoft’s recent Virtual Conference.

Installing a DevOps Agent (with Docker) with the most chance of success

You might have read my previous blog on DevOps build agents. Since then, I’ve been quite busy with DevOps – and especially with ALOps. And I had to conclude that one big bottleneck keeps being the same: a decent (stable) installation of a DevOps build server that supports Docker with the images from Microsoft. Or in many cases: a decent build agent that supports Docker – not even having anything to do with the images from Microsoft.
You have probably read about Microsoft’s new approach to providing images: Microsoft is not going to provide you any images anymore, but will help you create your own images – all with navcontainerhelper. The underlying reason is actually exactly the same: something needed to change to make “working with BC on Docker” more stable.

Back to Build Agents

In many support cases, I had to refer back to the one solution: “run your Docker images with Hyper-V isolation”. While that solved the majority of the problems (anything regarding alc.exe (compile) and finsql.exe (import objects)) .. in some cases, it didn’t solve anything, which leaves only one conclusion: it’s your infrastructure: the version of Windows and/or how you installed everything.

So .. that made me conclude that it might be interesting to share with you a workflow that – in some ways – doesn’t make any sense, but does solve the majority of the unexplainable problems with using Docker on a build server for AL development :-).

Step 1 – Install Windows Server 2019

We have the best results with Windows Server 2019, as it’s more stable and able to use the smaller Docker images.

Step 2 – Full windows updates

Very important: don’t combine the Docker installation, Windows updates and such. First, install ALL Windows updates and then reboot the server. Don’t forget to reboot the server after installing ONLY the Windows updates.

Step 3 – Install the necessary windows features

So, all Windows updates have been applied and you have restarted – time to add the components that are necessary for Docker. With this PowerShell script, you can do just that:

Install-WindowsFeature Hyper-V, Containers -Restart

You see – again, you need to restart after you did this! Very important!

Step 4 – Install Docker

You can also install Docker with a script:

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Confirm:$false -Force
Install-Module DockerProvider -Confirm:$false -Force
Install-Package Docker -RequiredVersion 19.03.2 -ProviderName DockerProvider -Confirm:$false -Force
  

You see, we refer to a specific version of Docker. We noticed not all versions of Docker are stable – this one is, and we always try to test a certain version (with the option to roll back), instead of just applying all new updates automatically. For a build agent, we just need a working Docker, not an up-to-date Docker ;-).
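To verify afterwards which version actually ended up on the machine (and whether the service behaves), a quick check could look like this:

# Which Docker package did we get (assuming it was installed through the DockerProvider as above)?
Get-Package -Name Docker -ProviderName DockerProvider
# Is the service running, and do client and engine respond?
Get-Service -Name docker
docker version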

Step 5 – The funky part: remove the “Containers” feature

What? Are you serious? Well .. Yes. Now, remove the Containers feature with this script and – very important – restart the server again!

Uninstall-WindowsFeature Containers

Restart-Computer -Force:$true -Confirm:$false

Step 6 – Re-install the “Containers” feature

With a very similar script:

Install-WindowsFeature Containers 
Restart-Computer -Force:$true -Confirm:$false

I can't explain why these last two steps are necessary – but it seems the installation of Docker messes up something in the Containers feature that – in some cases – needs to be restored.. . Again, don't forget to restart your server!

Step 7 – Disable Windows Updates

As Windows updates can terribly mess up the stability of your build agent, I always advise to disable them. When we want to apply Windows updates, what we do is just execute the entire process described above again! Yes indeed .. again!
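One possible way to disable them from PowerShell (group policy or your management tooling of choice works just as well) is to stop and disable the Windows Update service:

# Stop and disable the Windows Update service (wuauserv)
Stop-Service -Name wuauserv -Force
Set-Service -Name wuauserv -StartupType Disabled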

That’s it!

You might ask yourself: is all this still necessary now that we have moved to the new way of working with Docker, where we build our own images? Well – I don't know, but one thing I do know: the problems we had to solve were not all related to the Business Central images – some were just about "Docker" itself and the way Docker was talking to Windows .. (or so we assumed). So I guess it can't hurt to find a way to set up your build servers so that you know it's just going to work right away.. . And that's all I tried to do here ;-).
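And if you want to know right away whether the machine behaves, a quick smoke test before you register the build agent doesn't hurt – a minimal sketch, where the image tag is just an example (pick one that matches your host OS):

# Pull a Windows base image and run a throw-away container with Hyper-V isolation
docker pull mcr.microsoft.com/windows/servercore:ltsc2019
docker run --rm --isolation hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c echo Docker works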


New Upcoming Conference: DynamicsCon


I just wanted to raise some attention to a new Conference in town: DynamicsCon.

Quite interesting, because it's perfectly aligned with the current world situation regarding COVID-19: it's a virtual event .. and it's free! I'm not saying I prefer virtual events. I don't. But given the circumstances, I guess it makes sense – and it has some advantages as well: you will be able to see all the content, all sessions are pre-recorded (which means: demos will work ;-)), and you can follow it from your living room without losing any time on traveling.

Now, the committee is handling this really well: they have been calling for speakers for a while, and many people reacted. Really anyone could submit session topics to present. I did as well (you might have figured out already that I like to do this kind of stuff). So how do they pick the topics/speakers? Well, anyone who registers can vote for sessions!

So please, if you didn't register yet: do so now. Until August 1st (that's not far out!), you can help the committee pick the topics most people want to see during the conference. The sessions with the most votes will be picked! I'm not going to advertise my sessions – just vote based on the topics. That makes the most sense!

Some highlights on the conference:
– It’s free
– It’s virtual
– It’s not just for Business Central. These are the tracks:
○ Dynamics 365 Power Platform
○ Dynamics 365 Finance & Operations
○ Dynamics 365 Customer Engagement
○ Dynamics 365 Business Central
– There will be Q&A panels during the conference
– Recorded sessions, which will end up on YouTube!

Date
September 9-10

Using DevOps Agent for prepping your Docker Images


I have yet another option that might be interesting for you to handle the artifacts that Microsoft (Freddy) is providing instead of actual Docker images on a Docker registry.

What changed?

Well, this shouldn’t be new to you anymore. You must have read the numerous blogposts from Freddy announcing a new way of working with Docker. No? You can find everything on his blog.
Let me try to summarize all of this in a few sentences:
Microsoft can't just keep providing the Docker images like they have been doing. With all the versions, localizations .. and mainly the countless different hosts (Windows Server, Win 10, Windows updates – in any combination) .. Microsoft simply wasn't able to keep up a stable and continuous way to provide all the images to us.
So – things needed to change: instead of providing images, Microsoft is now providing "artifacts" (let's call them "BC Installation Files") that we can download and use to build our own Docker images. So .. long story short .. we need to build our own images.
Now, Freddy wouldn't be Freddy if he didn't make it as easy as at all possible for us. We're all familiar with NAVContainerHelper – well, the same library has now been renamed to "BcContainerHelper", and contains the toolset we need to build our images.
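To give you an idea, the whole "new way" basically boils down to something like this (a minimal sketch – the container name and settings are just examples):

# Get the BcContainerHelper toolset (the renamed NavContainerHelper)
Install-Module BcContainerHelper -Force
# Resolve the artifact URL for the latest W1 sandbox and build a container straight from it
$artifactUrl = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
New-BcContainer -accept_eula -containerName bcsandbox -artifactUrl $artifactUrl -auth NavUserPassword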

What does this mean for DevOps?

Well – lots of your Docker-related pipelines were probably downloading an image and using that image to build a container. In this case, you won't download an image, but simply check whether it already exists. If not, you build the image, and afterwards build a container from it, which you can use for your build pipeline in DevOps.
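In a pipeline step, that "check first, build only when needed" could look something like this (just a sketch – the image name is an example, and I'm using a plain docker command for the check):

$imageName = 'bccurrent:w1-latest'   # example name - use your own convention
# Only build the image when it doesn't exist on this agent yet
if (-not (docker images -q $imageName)) {
    $artifactUrl = Get-BCArtifactUrl
    New-BcImage -artifactUrl $artifactUrl -imageName $imageName
}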
Now, while BcContainerHelper has a built-in caching mechanism in the "New-BCContainer" cmdlet .. I was trying to find a way to have stable build timings .. together with "not having to build an image during a build of AL code". And there is a simple solution for that…

Schedule a build pipeline to build your Docker Images at night

Simple, isn’t it :-). There are only a few steps to take into account:

  1. Build a yaml that will build all images you need
  2. Create a new Build pipeline based on that yaml
  3. Schedule it every night (or every week – whatever works for you)

As an example, I built this in a public DevOps project where I have a few DevOps examples.

The yaml

Obviously, the yaml is the main component here. And you’ll see that I made it as readable as possible:

name: Build Docker Images

pool: WaldoHetzner

variables:
  - group: Secrets
  - name: DockerImageName.current
    value: bccurrent
  - name: DockerImageName.insider
    value: bcinsider
  - name: DockerImageSpecificVersion
    value: '16.4'
  - name: DockerArtifactRetentionDays
    value: 7

steps:
# Update BcContainerHelper
- task: PowerShell@2
  displayName: Install/Update BcContainerHelper
  inputs:
    targetType: 'inline'
    script: |
      [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::Tls12
      install-module BcContainerHelper -verbose -force
      Import-Module bccontainerhelper

- task: PowerShell@2
  displayName: Flush Artifact Cache
  inputs:
    targetType: 'inline'
    script: |
      Flush-ContainerHelperCache -cache bcartifacts -keepDays $(DockerArtifactRetentionDays)

# W1
- task: PowerShell@2
  displayName: Creating W1 Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):w1-latest

# Belgium specific version
- task: PowerShell@2
  displayName: Creating BE Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be -version $(DockerImageSpecificVersion)
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):be-$(DockerImageSpecificVersion)

# Belgium latest
- task: PowerShell@2
  displayName: Creating BE Image
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.current):be-latest

# Belgium - Insider Next Minor
- task: PowerShell@2
  displayName: Creating BE Image (insider)
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be -select SecondToLastMajor -storageAccount bcinsider -sasToken "$(bc.insider.sasToken)"
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.insider):be-nextminor

# Belgium - Insider Next Major
- task: PowerShell@2
  displayName: Creating BE Image (insider)
  inputs:
    targetType: 'inline'
    script: |
      $artifactUrl = Get-BCArtifactUrl -country be -select Latest -storageAccount bcinsider -sasToken "$(bc.insider.sasToken)"
      New-BcImage -artifactUrl $artifactUrl -imageName $(DockerImageName.insider):be-nextmajor

# Images
- task: PowerShell@2
  displayName: Docker Images Info
  inputs:
    targetType: 'inline'
    script: |
      docker images 

Some words of explanation:

  • Pool: this defines the pool where I will execute it. I know this will be executed on one DevOps build agent. This is important: as such, if you have multiple agents in a pool, you actually need to make sure this yaml is executed on all agents (because you might need the Docker images on all agents). Yet, this indicates that when you work with multiple build agents, this might not be the best approach.. .
  • Variables: I used a variable group here to share the sasToken as a secret variable over (possibly) multiple pipelines. The rest of the variables are quite straightforward: I'm not using the "automatic" naming convention from the BcContainerHelper, but I'm using my own. No real reason for doing that – it just makes a bit more sense to me ;-).
  • Steps: I'll first install (or upgrade) the BcContainerHelper on my DevOps agent and flush the artifact cache if it's too old (7 days retention). Next, I'm simply using the BcContainerHelper to create all images that I will need for my configured pipelines. You see that I have an example for:
    • A specific version
    • A latest current release version
    • A next minor (the next CU update)
    • A next major (the next major version of BC)

Schedule the pipeline to run regularly

Creating a pipeline based on a yaml is easy – but scheduling is quite tricky. Now, there might be a better way to do it – but this is how I have been doing it for years now:

1 – When you edit the pipeline, you can click the three dots in the top right corner, and click "Triggers"

In the Triggers tab, override and disable CI (you don't want to run this pipeline every time a commit is pushed)

Then, set up a schedule that suits you to run this pipeline, like:

And that’s it!

ALOps

If you're an ALOps user, this would be a way to use the artifacts today. Simply build your images, and use them with the ALOps steps as you're used to.

We ARE trying to up the game a bit, and also make it possible to do this inline in the build (in the most convenient way imaginable), because we see it's necessary for people that are using a variety of build agents, which simply can't be scheduled this way (as they are part of a pool). More about that soon!

Using Microsoft Dynamics 365 Business Central Artifacts to get to the source code of the default apps


A question I get a lot – especially from people that come from C/AL, and are only taking their first steps into AL – is: how do I get to Microsoft's source code of the BaseApp (and the other apps)?
Well, there are multiple ways, really. You can download symbols, and unpack the symbols. You can download the DVD and get to the code on the DVD, or…

You can simply download the artifacts

And with “the artifacts”, I mean the artifacts that are used to build your Docker images.
If you’re already building your Docker containers based on the artifacts, you probably already have them on your system! If not, you can still make them available, even without having to use Docker! Let’s see how that goes..
You might have heard about the "Get-BCArtifactUrl" cmdlet that I pull-requested into the BcContainerHelper. What I tried to achieve is an easier way to get to any version of Business Central: by listing all possibilities, and by giving a somewhat easier way to filter them. After many improvements from Freddy, you now have a way to easily get to the URL of any BC artifact that Microsoft makes available.
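A few examples of what that looks like (the country, version and type are just examples – adjust them to what you need):

# List every Belgian sandbox artifact that is available
Get-BCArtifactUrl -type Sandbox -country be -select All

# Or get the URL of one specific version
Get-BCArtifactUrl -type Sandbox -country be -version '16.3'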
The module also contains a way to download the artifacts with the "Download-Artifacts" cmdlet. So – you can easily get to the URL, and you have a cmdlet to download – let's do that! (If you haven't got BcContainerHelper yet, get it first!):

Download-Artifacts -artifactUrl (Get-BCArtifactUrl) -includePlatform  

It will download the artifacts to the folder "C:\bcartifacts.cache" by default (unless you set up another path). In that folder, you'll find all AL sources. A few examples:

The AL Base App: C:\bcartifacts.cache\sandbox\*version*\platform\Applications\BaseApp\Source

The Test-Apps: C:\bcartifacts.cache\sandbox\*version*\platform\Applications\BaseApp\Test
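Scripted, getting to the sources of one specific version could look something like this (a sketch – the version and country are just examples, and I'm assuming Download-Artifacts returns the app- and the platform-path when you ask for the platform):

# Resolve and download a specific artifact, including the platform (which contains the AL sources)
$artifactUrl = Get-BCArtifactUrl -type Sandbox -country be -version '16.4'
$paths = Download-Artifacts -artifactUrl $artifactUrl -includePlatform

# $paths[1] should be the platform path - the Base App sources live underneath it
$baseAppSource = Join-Path $paths[1] 'Applications\BaseApp\Source'
code $baseAppSource   # open the folder in VS Code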

I always work with the Docker containers, so automatically, I have the sources of the exact versions of BC on my own machine whenever I need it. But if you’re not working with it, or you work with a centralized docker system (so you don’t have anything local) .. now you know an alternative way to get to the sources ;-).

Use Azure KeyVault in AzureDevops for sharing licensing and other secrets


You are probably aware of how "secrets" work in Azure DevOps. In a way, it's simple: you can create variables, and store the value of a variable as a secret or not, simply by tapping the "lock" when creating the variable.

To share variables over multiple repos, you can create a variable group, and use that variable group in multiple pipelines.

Quite Easy! But …

Thing is – out-of-the-box variable definition in DevOps – as far as I know – is "just" on project level. We can define variables on a pipeline, we can pass them to templates, we can create "global" variables and such … but sometimes, you need to be able to share a (secret) value, like a license key, over just about all your projects. Or even across multiple DevOps organizations – however you chose to set things up.
Many partners have 1 DEV license key that expires every 90 days, so you might want to be able to share that license key over all your projects. The goal is: when you have a new key, there is just one place to change, and all your pipelines will keep running.

How do I share Secret variables over multiple projects?

Let me share a simple way to do that, but first a disclaimer: it could very well be that I'm not aware of a built-in DevOps option for this. Please let me know in the comments if that's the case.

Step 1: Set up an Azure Key Vault in the Azure Portal

In Azure (yes, you’ll need access to the Azure Portal), you have “Azure Key Vault”.

Just create a new Key Vault:

Step 2: Create Secrets

Once you have created your vault, you can simply navigate to it..

And start to create secrets:

As you can see, it’s simple: just a key/value pair basically:

The result is simply a list of secrets that you have now at your disposal.

To continue, let’s go back to DevOps…

Step 3: Create a variable group

As you might already know, variable groups can be linked to secrets in an Azure Key Vault. Since these are all secrets that we want to manage on a “high level”, it makes sense to take the highest level we can to manage variables in DevOps, and that’s: Variable Groups.

Step 4: Link it with Azure Keyvault

Make sure you link it with your Azure Key Vault (and authorize the subscription, and the vault if necessary).

Don't forget to add all the secrets you want to make available in this project. By default, none of the secrets will be linked – you need to "Add" them yourself!

Save, and done! Now, you will be able to …

Step 5: Use it in your pipelines

Here are a few examples of how to link it in your pipelines:

And use it:

Do know that, when running the pipeline, you might have to grant access to this service connection. Simply permit it and run the pipeline – you only need to do this once.

If you ever want to delete/disable access to this subscription, do know it has basically created a service connection, which you can find in the project settings:

Just after I wrote this post, I happened to find this one: https://zimmergren.net/using-azure-key-vault-secrets-from-azure-devops-pipeline/ . Definitely worth a read, as it drills a bit more into the security considerations.. .

You wonder how? The answer is “DevOps”!


You must have seen this blog post from Microsoft: Maintain AppSource apps and per-tenant extensions in Dynamics 365 Business Central (online)

And/or this blog of AJ: Business Central App maintenance policy tightened

And/or this post from James: SaaS enables progress, why block it?

If you didn't, please do. Because as a Microsoft Dynamics 365 Business Central partner, your job does not end with "being able to implement some customizations at a customer". No. When you create apps, these apps will live at customers that most probably will have continuous updates. And to quote Microsoft:

“It is your responsibility for continuously aligning the added code to Microsoft’s release rhythm that is expected for all Business Central online customers”.

Kurt Juvyns – Microsoft

Make no mistake! Don't believe the fairy tales of some people claiming that Microsoft will/should make sure your code will work forever. No, it's your code, your responsibility. Just like phone manufacturers change OS versions and screen sizes – and the apps on those phones need to either follow or be abandoned.

Microsoft refers to a page on docs: Maintain AppSource Apps and Per-Tenant Extensions in Business Central Online, with resources like release plans, access to pre-release, deprecation information, training and coaching information, … . And – a very clear warning:

If publishers lack to keep their code updatable, they risk that ultimately their apps or PTEs will be removed from the customers tenant, and this will most likely result in important data not being captured as it should. For apps, this also means removal from the marketplace.

Microsoft (Docs)

The article also explains that they will do what they can to inform the right parties when a problem is to be expected. I don't want to repeat here in the blogpost what they do and how frequently .. just check it out on Microsoft Docs – because that will be maintained.

Reading through the article, I was like .. uhm … hello .. didn't you forget a chapter? Shouldn't you include a "how to do this?" or "Best Practices" kind of chapter? I was actually quite disappointed it didn't mention one single word about the "how". Well .. if it had .. it would at least have contained the word …

DevOps

Let it be no surprise that whatever Microsoft just "announced" isn't really a surprise. It's rather a "confirmation" than an "announcement". But putting something in a contract is one thing. Dealing with it on an operational level is another! And what I (and not only me ..) have been screaming from the rooftops for the last so-many months – no – years, is exactly that: "DevOps is going to be key"!
Honestly, for 3 years now, I have not seen any way around DevOps. Questions like:

  • How will we work in teams?
  • How can we contribute to the same codebase .. and keep that codebase stable?
  • How will I be notified when my code won’t work against the next version – minor or major?
  • How will I deploy all these dependent apps?
  • How will I keep track of dependencies?
  • How will I maintain “breaking changes”?
  • How can I prepare myself for the next version, so that when it's released, I have my own app ready that same day?

All these, and many more challenges, have one simple answer: Microsoft Azure DevOps. And it's time every single Microsoft Dynamics 365 Business Central partner not only asks themselves those questions – but also starts to take action in answering them for their company and dealing with it. I can see only one reason why Microsoft is writing the article above .. and that is because they notice that (some) partners do NOT take up that responsibility.. .

I'm serious. We, as a community, make the name of Business Central. If we f..mess up, we mess up the "Business Central" name. It's as simple as that. Customers will not say "that app sucks" or "the partner sucks" .. customers will say "Business Central sucks". And it doesn't. Business Central rules! Or as Steve Endow would say: Business Central Is Amazing! It makes all the sense in the world that we do all we can to be as good as we can.

Starting with DevOps

The general Business Central partner might not be familiar with DevOps – we didn’t really use it with C/SIDE, did we? It’s going to take an effort. Sure. So let me give you a few resources, besides the many coaching-possibilities that Microsoft has in their “Ready To Go” program.
I really liked the book DevOps for Dummies from Emily Freeman. And today, I just learned that BC MVP Stefano Demiliani also wrote a book on DevOps. I have no idea if it's good – but I can't imagine it isn't ;-). I'm buying it for sure!
If you look more at AL specifically, there are people in the community that can definitely help you. I know we have Soren, Kamil, Gunnar and Luc that have been advocating "DevOps" and "Build Pipelines" for a long time. Just watch their NAVTechDays videos on YouTube. You have blogs from Michael Megel, and Tobias Fenster is also diving into making DevOps much more approachable for all partners!
Then you have me :-). I have been advocating DevOps so much the past couple of years. So. Much. And I'm still doing that with the occasional virtual training (thanks to COVID-19) and sessions at (virtual) conferences. A few years ago, I had a session at Directions US, and I got many requests like: hey, can you please make your software available to us .. which even resulted in tooling that you can use now, right within DevOps. But that's not the only tooling you can use – Freddy also maintains the BcContainerHelper, which can be used to create and maintain your apps in DevOps as well! Just follow his blog here.

Conclusion

So, about this article from Microsoft: if you have any questions about the "how" – just answer them with "DevOps" ;-). There is absolutely no reason for any partner that creates any kind of apps for Microsoft Dynamics 365 Business Central .. not to try to make its life as easy as at all possible. And a good start there is "Microsoft Azure DevOps". But that's just my opinion ;-).
