The whole IT infrastructure
Modernizing an IT infrastructure and its foundations always starts with a review of the existing IT infrastructure. This also means that reviewing the existing business workflows is a necessity.
Without going through these first, the whole IT infrastructure modernization is only going to be akin to a moving service: moving the data from one office to the next.
The new office might have a snazzy new coffee machine (MS Teams chat) and a lot of team-specific rooms for collaboration (Teams groups / Teams for specific Ops groups) that are all nice to have. However, if they are not utilized to their fullest extent (Teams / SharePoint automation and Teams apps), their benefit to the company will be rather lackluster.
What we’re trying to illustrate is that analyzing and improving the existing workflows and work patterns at least during each “office migration” is the recommended minimum. Preferably this should be done yearly, especially given the rapidly evolving nature of IT infrastructure.
When this is neglected, “temporary” workflow solutions that were meant to be just that become what they in most cases become: permanent workflow solutions that are inefficient.
Below are a few examples of such inefficiencies.
A simple (and terrible) example of this is storing all of the passwords used for general corporate accounts in a shared Excel file. Said file sits on a network share and is either protected by a simple password or not protected at all.
In this setup, when someone opens or changes the file, other people cannot open it. And if it has been changed or updated, some users might have an old version tucked away in some local folder on their own computer, to which the change is never propagated. Or worse, the shared file is promptly deleted and all passwords are lost forever (no backups available).
Suffice it to say, this is a huge security risk.
A simple and vastly superior solution is a password manager that doubles as MFA software. Such software can store both company and personal passwords as well as MFA codes. It also works on mobile devices and in web browsers, so mandating MFA for all logins becomes a feasible option. Without a solution like this, it’s nigh impossible to get such a change past operations and management. This way, both the passwords and MFA codes are quickly available to just the users and groups that need them, and the authentication process is vastly improved compared to the old model, or to using only a separate MFA app on a phone or plain text messages.
The second example is related to verifying and confirming documents that need it. In this example, one person is responsible for verifying and confirming that regulatory files are valid for customer delivery, and only that one person in the organization has been trained to do it. So whenever they are on sick leave or on holiday, all validation of files gets stuck. Removing such a bottleneck is easy: just train additional personnel for the task.
The third example is one huge general network share that houses all of the company’s files. This has been the go-to option for ages because streamlined, purpose-built shares used to be cost-prohibitive. That cost barrier has largely disappeared; cloud file shares, for example, are now far cheaper than they used to be.
Splitting this huge file share into smaller team-specific drives makes searching for data and collaborating within a team much faster. Also, since the amount of data handled by a single drive is smaller, search speeds are vastly better. And when new data comes in from a customer or from within the company, it can be automatically routed to the correct team and drive instead of being dumped into one huge location for all data.
In all of these examples, the existing workflow is vastly improved.
Even if all of the IT infrastructure is modernized multiple times and moved to multiple different locations, it won’t matter at all. Not when modernization of the workflows and underlying foundational structures is neglected.
Modernization of the workflows and foundations is a crucial part of any IT infrastructure modernization. Going through the root causes of these inefficiencies and analyzing the foundation of the IT infrastructure and its workflows is thus essential.
But let’s move on!
Legacy IT Overview
So how are we going to wrap all of the topics we’ve discussed in the earlier articles together and produce a coherent, modern IT infrastructure?
Firstly, the whole existing IT environment should be split into a few general yet critical sections:
- IT devices / access devices (computers, servers, mobile devices, web browsers)
- Business Critical workflows, services and tools (CAD, SQL, etc.)
- Company data, data management and access rights to said data
- Office and potential office-centric data (office network, printers)
- User accounts
This is just a crude breakdown, but in general these sections are enough to cover all the pieces of a company’s IT infrastructure.
When all of these have been reviewed and a plan for modernization has been enacted, the actual process of IT infrastructure modernization can begin.
Below is a general overview of a hybrid IT infrastructure, which was a modern setup a few years ago.
This kind of Active Directory based (hybrid) IT infrastructure is the baseline in most of the IT environments currently running in companies; reportedly it is in use in over 90% of Fortune 1000 companies.
Some notes about the image:
Data on the gray background is located either in the company’s office network or in a local data center. Only Azure / O365 is in Microsoft’s cloud (pictured bottom right); everything else on the white background depicts the Internet and the data that travels over it.
Section 1. describes a possible enterprise application login portal. This can be either a self-hosted portal, or just a login portal provided by a third party (f.ex. a Citrix Cloud type portal). However, the portal requires appropriate accounts, either a guest (external) or company (internal) account.
Public DNS should also be mentioned here, as it is mandatory for any IT infrastructure. It is the primary means of controlling the public namespace and of redirecting and validating Internet traffic to the correct company domain and resources.
Section 2. describes the company’s internal office network. This is either a Hosted Private data center network (with an S2S VPN connection from the office + VPN Gateway) or just a typical office network that users connect to via VPN Gateway.
Section 3. describes the traditional Active Directory implementation, which contains at least two Domain Controllers, implemented either on a server in the office or in a Hosted Private Data Center environment. It can be either hybrid with AAD Connect or a standalone AD.
Section 4. describes database servers, which are quite common in a slightly more complex IT infrastructure. They are used for critical workload apps that require a dedicated database server to work as intended. The same database server(s) can also serve f.ex. an SCCM server as well as a SCOM server, or, if implemented properly, each gets its own dedicated server / service.
Section 5. describes the actual business application that is logged in through the Portal in Section 1. This could be f.ex the company’s own e-commerce application or a website through which it can be accessed. Usually hosted in either on-prem or in the hosted private cloud via server(s).
Section 6. describes the Azure Active Directory Connect service. This service synchronizes the office’s Active Directory user accounts to the public cloud, in this case Azure Active Directory. Synchronization is always based on the on-prem Active Directory. It is not an essential service, but it makes management far easier in legacy environments, albeit with some risks.
Section 7. describes the gateway server for the VPN connection. This sits either in the Hosted Private data center or on a server in the office network. The connection is a bottleneck when most of the company is working from a home office, especially if it is configured as a Full Tunnel VPN (which is the only real option in the hybrid model).
The picture above further illustrates this bottleneck, as well as the bottleneck caused by point 2.
To explain a bit: you are first connected to the office network via a VPN connection over the Internet. Only from there are you connected to the general Internet and O365 services. It’s like going to a restaurant to get tap water instead of getting it from your home tap. And 50 other people are getting tap water with you at the same time, instead of one at home.
The speed is always capped at exactly the maximum the office network and VPN connection provide. The same VPN limitation also becomes apparent when only the office network is connected to the data center. In that case the hosted data center is the exit point for all network traffic, and from there the network connects to O365’s systems over an S2S VPN tunnel between the office network and the data center network. The encryption overhead of the VPN limits the connection speed to roughly 100–200 Mbps. An L2 network connection can also be considered here, but it often requires expensive deals with the local ISP and may not be encrypted.
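To put rough numbers on this bottleneck, here is a back-of-the-envelope sketch. The capacities below are illustrative assumptions chosen to match the 100/200 Mbps figures discussed above, not measurements from any specific network:

```python
# Illustrative math for the full-tunnel VPN bottleneck. All capacities are
# assumptions picked for the sake of the example.

def per_user_bandwidth_mbps(link_capacity_mbps: float, concurrent_users: int) -> float:
    """Best-case even share of a single shared uplink."""
    return link_capacity_mbps / concurrent_users

# An encrypted VPN tunnel capped at ~200 Mbps, shared by 50 remote workers:
vpn_share = per_user_bandwidth_mbps(200, 50)

# The same workers connecting straight to O365 over ~100 Mbps home links
# each keep their full home bandwidth instead of a slice of the tunnel:
direct_share = 100.0

print(f"Full-tunnel VPN, best case: {vpn_share:.0f} Mbps per user")
print(f"Direct connection:          {direct_share:.0f} Mbps per user")
```

Even in the best case, 50 concurrent users on a 200 Mbps tunnel get 4 Mbps each, and real-world contention makes it worse.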
Section 8. describes the user accounts that are used to log in to the @company.com domain. The external accounts that log in directly to online services via internet or use the online store on the company’s website are also included in this.
Section 9. describes both the commonly used network drives or network shares and other possible standalone enterprise applications, as well as the additional servers required for critical workloads. These may include f.ex. document management servers (DMS), SCCM and SCOM servers, and customer relationship management (CRM) systems, or a CAD system used for 3D modeling. These vary significantly from company to company, and can be a very large and important part of the business infrastructure, or just the bog-standard company-wide network drive for sharing files.
Section 10. describes devices that are not controlled in any way by the company but still have access to the company’s O365 resources. The most common example is a personal smartphone with corporate email and calendar synchronized to it. Another would be a home computer used to view O365 or Teams data from time to time, or any web browser (Chrome, Firefox, Safari, etc.) used to log in to the O365 portal for the web versions of all Office apps.
These individual, unmanaged locations and access devices are the single LARGEST risk factor for a hybrid company. If a company allows the use of such access devices and access to O365 resources without any oversight, or even directly to company resources, then do know that…
All of your business critical company data has already been leaked to a third, malicious party.
When these access devices are not protected or controlled in any way, it is right to assume outright that all data reachable with the rights of these devices has been accessed, viewed and copied by a malicious party. This also includes any web browsers (Safari, Firefox, Chrome, etc.) that have access to O365 data.
The reason for this is that you cannot feasibly know whether the company data has been breached or accessed. The company has no way to monitor or check whether the access devices have been compromised, since there is no monitoring in place on them. That automatically forces you to assume the worst possible outcome.
You can of course still assume that this cannot happen to your company, or that no one is interested in your data. Neither is the case: all available data is of interest to criminals.
Therefore, these locations must be included in the scope of modernization under management and access rights.
Alternatively, you can manage access to O365 enterprise resources so that unmanaged devices and locations are blocked. However, this works far better in a modernized environment. In hybrid or on-prem environments you still have to rely on Active Directory and the local office network + data center network being up and running to enforce these restrictions, and in the hybrid model the data connection between the office and the data center is always a bottleneck.
You’ve probably heard that one does not simply modernize an IT foundation.
So, how would one then modernize an IT infrastructure’s foundations in their entirety?
Towards Modern IT infrastructure
All of the subjects mentioned above are easily modernized.
How then? Let’s plop out a picture straight away.
The Modernized IT foundation
A bit of an overview first. The information on the gray background is in the same place, in this case in Azure AD / O365. There is nothing (nada, zilch) in the local data center, and the local office only has an individual office network. The base white background depicts the Internet and the data passing over it. A few other pictures below open this up a bit more.
Section 1. is changed so that there is only one login portal / system, i.e. Azure + O365 based login. This allows all authentication to be monitored under a single system and passed through Azure’s comprehensive auditing and logging. All cloud applications (AWS, Google, Facebook, YouTube, etc.) are also included in this monitoring and management via Cloud App Security. Thus, tracking enterprise data, its mobility and access to it is at a significantly better level than in a hybrid environment, where putting all authentication under the AD base is, plainly speaking, very hard.
Azure, on the other hand, comprehensively integrates all modern authentication services through direct, built-in and documented Graph API implementations. This means that integrating Azure AD based accounts into other systems takes significantly less time and money than in hybrid environments.
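As a rough illustration of how little glue code such an integration needs, here is a minimal sketch of a client-credentials token request against Azure AD, followed by the Microsoft Graph users endpoint. The tenant ID, client ID and secret are placeholders you would get from an app registration; the endpoint URLs are the standard documented ones, and no network call is made in this sketch:

```python
# Sketch only: builds the OAuth 2.0 client-credentials request used to obtain
# a Microsoft Graph token for an Azure AD app registration.
from urllib.parse import urlencode

TENANT_ID = "<your-tenant-id>"        # assumption: an app registration exists
CLIENT_ID = "<your-app-client-id>"
CLIENT_SECRET = "<your-app-secret>"

def build_token_request(tenant: str, client_id: str, client_secret: str):
    """Return (url, form_body) for the Azure AD v2.0 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    })
    return url, body

# With the returned access token, user accounts can be read from Graph at:
GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"

token_url, token_body = build_token_request(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
```

Any HTTP client can then POST `token_body` to `token_url` and call `GRAPH_USERS_URL` with the resulting bearer token. Compare this to the custom LDAP or ADFS plumbing a hybrid integration typically needs.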
Section 2. provides the most significant change in the environment. Essentially, the VPN connection and the office network become unnecessary for critical business operations. How is this possible without being a security issue? The answer is that all data is protected with SSL / TLS encryption; in other words, all traffic is already directly protected and encrypted in transit. Thus, there is no need for a separate VPN implementation through the data center or office network, both of which would otherwise become a bottleneck for all future business operations.
This change, and the resulting redundancy of the office network, means that the total bandwidth available to the company is roughly the number of employees × 100 Mbps (the average speed of a home network). Also, any remaining office network can be significantly streamlined, and the internal DNS / DHCP configuration becomes far less cumbersome to implement and maintain. Normally both of these require their own servers and thorough maintenance, but here both are easily handled by the network’s firewall / routing device. In larger office networks (300+ full-time office users) there may still be a clear need for dedicated pfSense-type firewall rack server(s), but in 90% of modernization cases this change does not require additional local firewall servers.
The image below further shows that each Azure service has a direct, encrypted, secure and speedy connection to the Internet, and that the connection remains secured all the way to a managed, monitored and automatically updated endpoint. Also, all users have the full bandwidth of their home networks available for these services, unlike with a VPN solution.
So this update offers all the benefits of a VPN connection (security) without the limitations and requirements (heavy infrastructure) it imposes. Also, firewall services and load balancing are built into Azure’s Virtual Network and Azure Virtual Desktop solutions, which means these are protected from malicious access and DDoS by default.
Section 3. is one of the more interesting parts of the modernization: Azure Active Directory Domain Services (AADDS), a service that was deemed unnecessary a few years ago. However, Azure Virtual Desktop has changed this significantly. Combining these two services modernizes any on-prem, business-critical application based on legacy AD authentication into a scalable Azure-based solution. AADDS provides automatically maintained and managed Domain Controllers as well as its own subdomain under the Azure tenant. This allows legacy AD authentication and GPOs to be implemented efficiently, with very little maintenance, for the business-critical applications that still require them. These are often the most critical apps in the business infrastructure, and modernizing them is the biggest issue in potential modernization projects. AADDS combined with Azure Virtual Desktop now offers a route to straightforward modernization.
Section 4. deals specifically with the fact that all SQL server structures for in-house device and application management and monitoring can be eliminated from the modernized production environment. This is because application management moves entirely to Intune, which does not require SQL servers. The existing capacity can be downsized significantly or repurposed to boost new business-critical workloads on Azure. Ideally, all critical production applications based on AD authentication should also be modernized to the application provider’s cloud-based variants of the software (SaaS). However, this is not always possible, and in those cases AADDS combined with Azure Virtual Desktop is a remarkably good choice.
Section 5. focuses on the virtualization of web applications. They can be modernized, managed and secured via the Azure App Proxy service. The App Proxy is implemented on a newly created Azure application server, which then handles access control and authorization for the web applications connected to it. Via this proxy, HTTP-based applications that require legacy AD can be accessed securely over the Internet in the modernized environment, and authentication works directly with O365 accounts using RBAC roles and groups.
Section 6. describes precisely one of the most important parts of the AADDS deployment. In a standard AD Connect configuration (hosted / on-prem), the on-prem Active Directory always holds the anchor value. This is not the case with the AADDS variant: it always uses the Azure Active Directory user account as the anchor value, so there is minimal risk of e-mail or other account issues. F.ex. a possible AD Connect synchronization error cannot delete the entire company user account listing, which would temporarily delete ALL related mailboxes from O365 and interrupt e-mail traffic for all synced company mailboxes until IT fixed the issue.
Hence Reverse AD Sync is a very valuable part of AADDS, and it is rarely brought up at all unless you already know about it. This functionality cannot be achieved even if the Active Directory server configuration is done on Azure as separate servers, which is also one form of hybrid solution. Reverse AD Sync can only be achieved through Azure Active Directory Domain Services.
The general benefits of Azure Virtual Desktop are shown in the picture below:
Section 7. highlights the fact that VPN Connection is no longer required.
Split Tunnel VPN is useful in one particular situation though: with SMB port (445) rerouting and an Azure Files modern network drive. This is because Internet service providers (ISPs such as Elisa, Telia or DNA) continue to block all SMB port traffic by default, due to their old policies (or to protect their own hardware). In this situation, a Private Endpoint implementation can circumvent the block: it reroutes all Azure Files SMB traffic on a user’s managed computer over an OpenVPN-based P2S Split Tunnel VPN connection. The benefit is that only Azure Files data is routed over the Split Tunnel VPN connection, so it does not affect other O365 and Azure services. It is also only required if the ISP is blocking traffic on SMB port 445; there are also a few ways to get the port unblocked by contacting your local network provider.
The reason for this ISP blocking is the (in)security of SMB 2.1 and SMB 1.0 based data transfers, which are inherently insecure over the Internet. Azure Files, however, forces the use of the secure SMB 3.0+ protocol when transferring or processing data over the Internet. Since the ISP blocking is done only at the port level, it has to be bypassed in some way. Apart from that edge case, a VPN connection is completely unnecessary in this modernized environment.
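Whether your ISP blocks port 445 is easy to test before deciding on a Private Endpoint. Below is a small sketch; the storage-account hostname in the commented example is a placeholder (real Azure Files endpoints follow the `<account>.file.core.windows.net` pattern):

```python
# Check whether an outbound TCP connection on port 445 (SMB) succeeds from
# this network. If it fails, the ISP or a local firewall is likely blocking
# SMB, and the Private Endpoint + split-tunnel workaround described above applies.
import socket

def tcp_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is a placeholder for your own storage account):
# if not tcp_port_reachable("mystorageacct.file.core.windows.net", 445):
#     print("Port 445 appears blocked; consider a Private Endpoint")
```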
Section 8. points out that only one user account is used: the O365 / Azure AD account. The good thing about this account is that almost all companies already have some sort of Microsoft tenant configured; such an account is created automatically f.ex. when the O365 e-mail service is taken into use, with Azure AD as its baseline. Because of this, the user accounts required for migration are already in place in a large number of companies, and migration & modernization is significantly less cumbersome. All O365 logins and login events can also be managed and tracked directly through Azure’s highly comprehensive auditing and monitoring; f.ex. suspicious login attempts from abroad can be easily detected and prevented. This means that, with proper settings, a Zero Trust model is automatically in use in the modernized environment.
Section 9. details modernization of production data storage and other services (f.ex printers), which also always raises a lot of questions. “Then how are files and file servers managed in a modern environment?”. There are indeed a few options for this. The best course of action is always to first evaluate all the necessary data and the associated workflows that are part of the data and modernization.
Analyzing the workflows takes time, though, which can feel like time spent on “useless” things instead of focusing on the modernization project itself. This is why all data is often just thrown into the cloud haphazardly. Our recommendation is always to assess, at least superficially, how the existing data structures and ways of processing data could be improved, even slightly. It also bears fruit to analyze how they could be improved in the future, after the modernization.
In this case at least a plan is ready after the modernization, and the workflows do not continue to stagnate without improvement.
But back to IT. There are two good options for changing the file structure: transferring everything directly to Teams group-specific teams (SharePoint in the background), or transferring part of the structure to Teams and part of it to Azure Files. We do not recommend Azure Files as a standalone, one-to-one replacement for network shares. That is not real modernization, just a stopgap until the proper data modernization.
Azure Files should be used if one large glut of enterprise data has to be mapped directly for users (more than 300,000 files / folders in the data structure). Because Teams / SharePoint local synchronization is based on OneDrive synchronization, compromises must be made with very large numbers of files and folders. Structures larger than 300,000 files and folders do work through Teams and SharePoint, but synchronizing the entire file metadata of such a structure can lock up a user’s computer. The backend is not affected, but the individual user’s computer hangs. This is why data flows and structures need to be assessed and modernized, and the need for drive mapping carefully planned.
If the above assessment and workflow modernization is not possible, then Azure Files provides the best and most functional option for managing enterprise data in the modernized environment. There are some additional limitations, which may require the data to be split and refined a bit more before migration. This is due to the flat access rights structure of locally mapped Azure Files drives.
Another common “problem” that pops up is the print server. How can we survive without organizing and maintaining one? Well, quite easily. It is simply no longer needed in today’s office environment.
The true solution for smaller printing needs is Direct IP printing plus a PowerShell script based printer configuration; for larger needs, a Printix-type cloud printing solution. Both options completely replace the need for a print server.
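As a sketch of the Direct IP approach, the provisioning script can be generated from a simple printer inventory. `Add-PrinterPort` and `Add-Printer` are the built-in Windows PrintManagement cmdlets; the printer names, IP addresses and driver below are made-up examples:

```python
# Generate a PowerShell provisioning script for Direct IP printers from an
# inventory list. The generated script could then be pushed to managed
# devices (f.ex. as an Intune script). Inventory values are examples only.

PRINTERS = [
    {"name": "Office-MFP-1", "ip": "192.168.10.21", "driver": "Generic / Text Only"},
    {"name": "Office-MFP-2", "ip": "192.168.10.22", "driver": "Generic / Text Only"},
]

def provisioning_script(printers) -> str:
    """Emit Add-PrinterPort / Add-Printer lines for each inventory entry."""
    lines = []
    for p in printers:
        port = f"IP_{p['ip']}"
        lines.append(f'Add-PrinterPort -Name "{port}" -PrinterHostAddress "{p["ip"]}"')
        lines.append(f'Add-Printer -Name "{p["name"]}" -DriverName "{p["driver"]}" -PortName "{port}"')
    return "\n".join(lines)

print(provisioning_script(PRINTERS))
```

Keeping the inventory in one list means adding an office printer is a one-line change instead of a print-server task.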
Section 10. highlights the single most important aspect of the modernization. Access to all work resources by access devices outside the company’s control is blocked. This is usually referred to as a Zero Trust situation, where each login is insecure by default and verified by certain conditions (Conditional Access + Azure AD). This only allows you to log in and use company resources if certain conditions are met. This change flat out prevents a lot of the possible situations in which uncontrolled access devices (browsers, computers, mobile devices) could be used to access company resources.
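To make Section 10 concrete, below is a hedged sketch of what such a Conditional Access rule looks like as a Microsoft Graph request body (it would be POSTed to `/identity/conditionalAccess/policies`). The field names follow the documented conditionalAccessPolicy schema to the best of our knowledge; the display name is invented, and a real deployment would scope and test the policy carefully before enabling it:

```python
# Sketch of a Conditional Access policy that only grants access from
# compliant (Intune-managed) or hybrid-joined devices. Display name and
# scoping are illustrative; verify field values against current Graph docs.
import json

policy = {
    "displayName": "Require managed device for all cloud apps",  # example name
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {
        # "OR": satisfying either control is enough to be granted access
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}

print(json.dumps(policy, indent=2))
```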
This also means that every access device from which company data is processed is under the company’s control. After its EMPLOYEES, the most important thing for a company is its DATA. This security model greatly reduces the risk of human error, and it also protects the company’s employees in the case of local device breach or theft.
Often this is not taken into account when deploying on-prem or cloud services, and security is still based on the “perimeter defense” model described above, in which only a certain predefined “area” is protected (office / on-prem data center network). Devices and locations that can access either the production data system or O365 emails and Teams are not taken into account at all, or not well enough.
A few examples of such access:
- When web-based e-mail data processing is allowed (OWA)
- Business phones are not monitored in any way
- Mobile or Web browser-based e-mail processing is allowed with the use of MFA from any access device.
The most important security change in this whole modernization project is preventing unmanaged access devices from gaining direct access to company resources. The same result can be achieved in a hybrid or on-prem environment, but that is both laborious and expensive to build and maintain. The most common approach is to allow external access under certain conditions (e.g. an MFA requirement). Such access control verifies that the conditions are met, but nothing prevents the access device holding the MFA token from already being compromised.
Thus it would still be sending data to a third, malicious party. This is the worst situation imaginable, since the company data would be monitored, scanned and exported by a malicious party without the company noticing anything.
A quick example of this:
- Screenshots of the unmanaged device or browser regarding the open company documents (keyword / service based) or the entire display screen are taken in the background
- These are then sent via the access device / browser to a third party over the Internet (and encrypted / secured to prevent monitoring from picking this up).
- These screenshots are analyzed automatically with OCR or other means.
- All the processed company document data has been leaked to a third, criminal party.
- And the company that was the target of this knows nothing about this
- This is due to the fact that there is no monitoring or blocking for it on an unmanaged access device or browser.
In other words, simply requiring MFA is by no means a sufficient solution. This above example is one way to circumvent the MFA.
This means that cryptolockers and other visible attackers are not the biggest threat. They are just the most visible result of a security breach that has already happened. The more advanced parties got there first, exported and analyzed the company data they needed, and will keep doing so for as long as unmanaged access is allowed.
Aftermath
The example IT infrastructure and modernization overview above describes a relatively typical company infrastructure. Every company’s IT environment differs slightly from it, in most cases in its critical business applications. For those applications, the following recommendation always applies:
>>IF POSSIBLE, MODERNIZE YOUR CRITICAL BUSINESS APPLICATIONS TO THE CLOUD PaaS / SaaS SOLUTION OF THE APPLICATION PROVIDER!<<
This in itself means that at least the existing service structures will be evaluated and potentially updated after the evaluation.
The evaluation can reveal that an old and “critical” service can in fact be FULLY replaced by a modern solution. For example, Teams and SharePoint can replace any self-managed Intranet solution. Said Intranet can then be modernized into a learning platform with Microsoft Viva and automated with Power Automate (formerly Microsoft Flow). And this is just a simple example.
However, there are situations where some critical business applications cannot be modernized in this way (f.ex some of their dependencies require legacy AD). In these cases, the modernized IT infrastructure example above is the best and most sustainable solution for the future of the company. This is due to the fact that it takes into account all of the following aspects:
- IT equipment baseline (computers, servers, mobile devices)
- Business critical workflows, services and solutions
- Company data, the management of the data and the access rights of data
- Office + potential data residing there (local office network, printers)
- User accounts
Let’s get to the specifics:
The IT equipment baseline is changed completely, but only on an app / solution level. To reiterate, every location, device and service that processes company data is secured and brought under management. This does not require any additional hardware investments from the company, since it is part of the appropriate M365 license package. Leveraging Azure Arc might also be warranted in very large on-premises cases, but it is not required for this example.
Usually all of this is outsourced, simply because of the complexity of application and device management. With the modernization, this complexity is erased, and either the company itself or any competent managed service provider (MSP) can manage the resources easily.
This also means that the customer is not tied to one service provider and can freely get the best service they deserve. The purpose is also to simplify the IT environment as much as possible and to automate as much of the IT busywork as possible. This minimizes the time employees spend on work that can be automated and frees them for productive, billable work for the company’s customers.
In this modern infrastructure, mobile devices also come under control and/or access restrictions. As noted above, mobile devices are often not taken into account well enough. Enterprises assume that no company data is processed on them, or that it will be secure anyway.
In fact, any web browser or device that has access to email or Teams can very easily view, process and transfer company data. One important scenario to note is the theft of a (mobile) device. In that situation the company’s options are potentially very limited, especially if the device or its access cannot be controlled remotely through mobile device management (MDM). Without such control, the entirety of the company’s data can be leaked due to a single phone (or device) theft.
In the modernized IT infrastructure, this is easy to both prevent and control.
Device deployments and OS + application updates are fully automated via Intune and AutoPilot instead of SCCM and MDT. These two modern deployment solutions enable updating, distributing and monitoring all applications in the company without heavy investment in separate application management servers or solutions.
For example, Windows Updates no longer require a dedicated WSUS server, which would demand a huge amount of maintenance from IT. Windows Update for Business (WUfB) completely eradicates the need for WSUS and comes built into Intune.
The updates are distributed through the Intune / MEM portal, which means updates also reach devices in home offices efficiently. Since it all happens over the internet, no VPN is required.
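To make the WUfB idea concrete, below is a minimal sketch of what an update ring policy body could look like when created through Microsoft Graph (the `windowsUpdateForBusinessConfiguration` resource type). The deferral values and ring name are illustrative assumptions, not recommendations:

```python
# Sketch: a Windows Update for Business "update ring" as it might be posted
# to Microsoft Graph (deviceManagement/deviceConfigurations). Field names
# follow the windowsUpdateForBusinessConfiguration resource type; the
# deferral values here are placeholders, not recommendations.
import json

def build_update_ring(name: str, quality_deferral_days: int,
                      feature_deferral_days: int) -> dict:
    """Build a Graph API request body for a WUfB update ring."""
    return {
        "@odata.type": "#microsoft.graph.windowsUpdateForBusinessConfiguration",
        "displayName": name,
        "qualityUpdatesDeferralPeriodInDays": quality_deferral_days,
        "featureUpdatesDeferralPeriodInDays": feature_deferral_days,
        "automaticUpdateMode": "autoInstallAndRebootAtScheduledTime",
    }

ring = build_update_ring("Broad ring", quality_deferral_days=7,
                         feature_deferral_days=30)
print(json.dumps(ring, indent=2))
```

Because the ring is just a policy object in Intune, devices pick it up over the internet wherever they are, which is exactly why no WSUS server or VPN is involved.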
This also applies to third-party software updates (e.g. Adobe Reader, Firefox). There are a few different options for this. If only a few common third-party applications are needed, the Chocolatey Public Repository is mightily sufficient. However, if the company requires a more robust offering, a dedicated Chocolatey server can be configured in the customer's own Azure tenant, and each dedicated application package can be created through it. For even greater needs, Chocolatey for Business (C4B) is the best option (see Chocolatey's Software | Pricing page).
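The Chocolatey side of this can be sketched as the commands a management agent or scheduled task would run. The helper below builds standard `choco upgrade` command lines; the package names and the internal repository URL are placeholders for illustration:

```python
# Sketch: building the Chocolatey CLI commands that would keep third-party
# apps current. "https://choco.example.com/repo" is a placeholder for a
# company-hosted Chocolatey server; adobereader comes from the public
# Chocolatey Community Repository.
from typing import Optional

def choco_upgrade_command(package: str, source: Optional[str] = None) -> list:
    """Return the choco upgrade command line for a package, optionally
    pinned to a specific (internal) repository source."""
    cmd = ["choco", "upgrade", package, "-y"]
    if source:
        cmd += ["--source", source]
    return cmd

print(choco_upgrade_command("adobereader"))
print(choco_upgrade_command("companyapp", source="https://choco.example.com/repo"))
```

In practice these commands would be wrapped in an Intune-deployed script or a scheduled task; the point is only that the update mechanism is a one-liner per application, not a dedicated server product.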
AutoPilot makes every delivery of company IT equipment fully automated. It is no longer necessary for IT to format devices separately in the office before delivery to the user. The devices are ready for company use directly from the factory, which means they can be delivered straight to the user's home office, or to the company's office if necessary. In practice, a new employee gets direct access to the work environment designated for them immediately. Not next week, or after a month. The following link has clear documentation on the benefits of this service: Overview of Windows Autopilot | Microsoft Docs.
And the best thing? AutoPilot doesn't require a server or cost a thing. Not a dime. It is built into the M365 BP license offering. And with Microsoft Viva, you can automate the whole onboarding procedure for new employees by directing them to the company-specific or required resources via Teams + Viva: they get exactly the info, training and documentation they need to accomplish their designated jobs in the company.
At the same time, the security of the data is handled by Microsoft Defender for Endpoint. The Defender solution suite integrates seamlessly with the whole Microsoft O365 / Azure ecosystem. It is also available for Apple and Android devices (and Linux in preview). Needless to say, it also works on Windows. Since this solution is directly integrated into the Microsoft ecosystem, it is markedly better than any other XDR or AV solution; with the Defender suite, there is no need for alternatives. Together with AutoPilot and Intune, the whole system works seamlessly.
If the system detects irregularities in data access, it blocks all data access on the device and wipes it directly in the event of a potential threat. With this in mind, the damage from a stolen device would be negligible. Firstly, the device is already encrypted by default and has mandatory biometric authentication enforced via policy. As soon as it reaches any kind of internet access, it is wiped clean, and the documents on it are not available without an internet connection, since they are only accessible via MFA and the user's login details. And since DLP policies prevent documents from leaving the company cloud, even access to the data would only allow viewing it, as it could not be moved outside the company-designated cloud tenant or services.
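The decision flow described above can be sketched as a simple policy function. This is an illustrative model of the logic, not actual Defender for Endpoint code, and the device attributes are assumptions for the sketch:

```python
# Illustrative model (not actual Defender for Endpoint logic) of the flow
# described above: a lost device is blocked from company data and queued
# for remote wipe; a non-compliant device is blocked until it is compliant.
from dataclasses import dataclass

@dataclass
class Device:
    encrypted: bool
    compliant: bool
    reported_lost: bool

def access_decision(device: Device) -> str:
    if device.reported_lost:
        return "block-and-wipe"  # conditional access revoked, remote wipe queued
    if not (device.encrypted and device.compliant):
        return "block"           # access denied until the device is compliant
    return "allow"

print(access_decision(Device(encrypted=True, compliant=True, reported_lost=False)))  # allow
print(access_decision(Device(encrypted=True, compliant=True, reported_lost=True)))   # block-and-wipe
```

The key point is that the decision is evaluated at every access attempt, which is why the stolen device is cut off the moment it touches the internet.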
Intune also offers the ability to integrate mobile app and device management with both Google's Managed Google Play and Apple Business Manager. Through these, full control and automation of both Android and macOS / iOS devices is both easy and cheap for the company. Neither of these services costs anything with Intune, and both offer SSO / Federation benefits when configured.
The infrastructure complexity will also be significantly reduced when the server base shrinks with this modernization. This makes any update or vulnerability management far more streamlined than before. When all data is already under SSL / TLS, BitLocker or mobile device encryption by default, vulnerabilities are far less prevalent and remain under company control.
Next up on the list are critical services and workflows. With these, the first step in modernization is always an assessment of their necessity and the possibilities for workflow overhauls. Since these workflows are, by far, the most important sources of income for the company, it's clear that evaluating them is of critical importance, to IMPROVE THEM!
After the assessments, both the company's productivity and continuity are already better off. At the same time, the longevity of the company's critical workflows must also be checked over. It is by no means impossible that the current operating models or practices are completely obsolete now, or will be in just a few years' time. Due to this, it's a good idea to make an assessment at least every few years. When these assessments are not actively done in today's digital world, the entire business workflow of a company may become obsolete seemingly overnight. Emerging from such a situation is highly difficult and often impossible. One such example is Nokia and mobile phones: it took Nokia 5 – 10 years to pivot after its failure to adapt to the smartphone segment.
But if this kind of modernization of critical services and workflows is not possible, using Azure AD DS together with Azure Virtual Desktop is a very valid option. With this, the basis of the company IT infrastructure rests on a modern solution that will last at least 10+ years. This same solution remains compatible with older critical business software through legacy AD compatibility, and offers almost limitless scalability as business needs grow. Cost reductions are attainable through combined licensing and infrastructure streamlining, but they should not be the main goal of modernization. Cost savings are best achieved when existing operating models are updated; if legacy software and workflows are just lifted and shifted, the costs tend to remain as they are.
Another significant change will be in general data management, identity management and access rights. All company data will basically sit behind a single account, the Azure Active Directory account. This greatly simplifies the number of daily logins for users and reduces the need for a bucketful of business accounts. This alone adds significant value to day-to-day work, as accounts do not need to be configured separately for each service offering; they can just be connected to Azure AD. The accounts can also be secured and monitored by the company. In this way, all possible third-party logins can be tracked and, if needed, prevented.
MFA is required by default, which is the LARGEST SINGLE security update a company can make in today's IT world. A simplifying factor here is also that each solution requires some account or authentication before the data can be accessed and processed. This means that any potentially malicious situation related to the processing of company data via third-party apps can be identified and prevented. This is accomplished through Microsoft Defender for Cloud Apps (formerly Cloud App Security) + Label Management. For what they offer, these two combined form a supremely powerful management tool for modern file security. They are most effective together, and only in a modernized environment through the Compliance licensing.
Through this comprehensive combination, "Confidential" or "Highly Confidential" material and files can be monitored, and the transfer of such material from the company's internal resources to external ones can be blocked. This simple file tracking (and blocking) erases the risk of human error, and the potential risk of a GDPR violation becomes nigh non-existent. Due to this, the company's general DLP policy must also be comprehensively assessed, to determine the Data Loss Prevention scope and procedure.
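The label-based blocking rule boils down to a simple check. The sketch below models it; the label names come from the text above, while the tenant domain list is a placeholder assumption:

```python
# Sketch of the label-based DLP rule: files carrying a "Confidential" or
# "Highly Confidential" sensitivity label may only move to company-internal
# destinations. "contoso" domains are placeholders for the company tenant.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}
COMPANY_DOMAINS = {"contoso.sharepoint.com", "contoso-my.sharepoint.com"}

def transfer_allowed(label: str, destination_host: str) -> bool:
    """Return False when a labeled file would leave the company tenant."""
    if label in BLOCKED_LABELS and destination_host not in COMPANY_DOMAINS:
        return False  # DLP policy blocks external transfer of labeled files
    return True

print(transfer_allowed("Highly Confidential", "dropbox.com"))            # False
print(transfer_allowed("Highly Confidential", "contoso.sharepoint.com")) # True
```

The real policy engine evaluates far more context (user, app, device state), but the core guarantee is exactly this: the label travels with the file, and the destination decides the verdict.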
Since all data access rights are also centralized under the Azure AD login account, they are highly auditable. They can also be automated through dynamic user groups, Azure Privileged Identity Management and RBAC roles. The duration of access rights can be defined and automated, so access to specific roles is eliminated when it is no longer needed for the defined accounts. This means that RBAC rights can be granted on a temporary and monitored basis, which in turn significantly reduces the risk of rights persistence. Accounts no longer get stuck with overly high RBAC privileges, as tends to happen with static configurations.
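The time-bound assignment idea can be illustrated with a toy model. This is not the Azure PIM API, just a sketch of why expiring assignments remove the rights-persistence problem:

```python
# Illustrative model of PIM-style time-bound role assignments: elevated
# rights carry an expiry timestamp and disappear on their own, instead of
# persisting the way static RBAC configurations do.
from datetime import datetime, timedelta, timezone

def active_roles(assignments: list, now: datetime) -> set:
    """Return only the role names whose time window has not yet expired."""
    return {a["role"] for a in assignments if a["expires"] > now}

now = datetime.now(timezone.utc)
assignments = [
    {"role": "Global Reader", "expires": now + timedelta(hours=8)},      # active
    {"role": "User Administrator", "expires": now - timedelta(days=1)},  # expired
]
print(active_roles(assignments, now))  # {'Global Reader'}
```

Expiry plus an audit trail of every activation is what makes temporary elevation safer than a permanently assigned admin role.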
For the office and office network, this modernization is disruptive, but in a good way: both become essentially unnecessary and no longer limit the growth of the company. Quite often even larger companies have some kind of corporate network to which both the office and the home office need to connect with a VPN solution. There is a simple reason for this: to protect an otherwise unprotected connection to the corporate network. A VPN solution is a good option in that case.
However, in a modernized environment things are different. The whole VPN connection is made completely redundant, which eliminates the entire office and corporate network bottleneck. When working from the office network, a VPN connection is hidden in the background from general users through a VPN S2S tunnel and/or an SD-WAN/L2 configuration. Both are significantly expensive and restrictive to maintain, and require specific talent and (ISP) services. The cost of a network connection and a secured VPN connection + devices, even in a small office, is about €1,000 – 2,000 / month. This estimate covers only relatively "light" connections, meaning speeds between 100 – 500 Mbit/s. With higher speeds, above 1 Gbit/s, the cost becomes very restrictive. This update also removes the requirement for the office network to be up and running at all times, cutting a potential business continuity risk. Below is a clearly illustrated, traditional VPN implementation, and where the bottleneck occurs. It's the single network connection via the office network that is the reason for most, if not all, remote-working slowdowns. It's called the Enterprise Last Mile.
The importance of this foundational upgrade is often significantly downplayed.
But as noted above, it frees up both the company and the employees to work from anywhere and without restrictions. Security works seamlessly through controlled and managed devices and via SSL / TLS encryption. This means that all data is always protected by at least SSL / TLS encryption by default, in transit and at rest. In VPN solutions, the encryption is not guaranteed when it matters the most: for example, there may be no security besides the VPN on legacy network shares, and they are processed over an insecure connection on the office / data center network side, after the VPN tunneling. This means that the network itself is highly vulnerable, and just getting into the network poses a potential threat to company data.
The image below illustrates an alternative option, a Split Tunnel VPN solution. In this solution, all data except O365 data travels along the VPN tunnel. In other words, it is still a very old-fashioned and restrictive VPN connection. It only improves speed, while keeping the potential issues on the office / data center network.
Modern and correct solutions to this are the following two options:
- No VPN connection at all -> ideal situation and the true representation of a Zero Trust model. Protection through Azure App Proxy for internal web applications that need it.
- Restricted VPN connection that is only used for Azure Files routing. Required by certain ISPs due to legacy port 445 blocking.
In both situations, legacy AD has already been replaced by Azure AD. In the case of Azure App Proxy, only specific web-based internal HTTP applications in the Azure environment need this solution. The two can also be freely combined if there is a need for both web applications and Azure Files network drives. And for modern web apps, Azure App Service or AKS is the recommended route.
As for the existing additional office network services, they can be largely decommissioned. For example, a comprehensively secured, perimeter-defense-based network loses its significance when all data processed in the office network is already protected by SSL / TLS. In other words, even light office network security is sufficient, since data leakage would require breaking the SSL / TLS encryption, and if that happens, we all have far bigger problems. To lighten the setup effectively, the network firewall appliance can double as the office's internal DNS and DHCP server. This makes the office network significantly leaner and more agile. And in the long run, the company can comprehensively assess the size of the office itself, as well as the need for such a large office. When the office size can be significantly reduced through remote work, the rent or capital costs of the office space drop hugely. The last couple of years with the COVID-19 pandemic have proven that remote work is both effective and efficient for any company that does most of its billable work via IT. And this modernization enables remote work in its best, most future-proof and most secure form.
Configuring office printers is also easy and efficient. The default modern configuration is done on a Direct IP baseline, using the printer's own internal print queue as the main piece of the puzzle. Every modern printer has a print server built in, in the form of a print queue, which erases the need for a local print server. And if you want secure printing from your home office to your office network, you can configure the Printix cloud printing solution for remote and secure (local) printing.
And lastly, user accounts. These were briefly addressed in the general data management and other sections, but a separate overview of the user accounts themselves is needed. More often than not, a company has a significant number of inactive or unused accounts in legacy or hybrid Active Directory environments. Assessing and monitoring their access is often difficult due to the structure of legacy AD in general. Legacy AD is often built like a very high wall, but with lax internal access policies. For example, any general user or computer account in a non-modernized legacy AD environment can access the general information of all the company's employees, as well as the general information of the company's computers. In a modernized Azure AD environment, this is prevented simply by the fact that user synchronization and device synchronization are separated between Azure Active Directory and Intune/MEM. So by default, a computer or user account cannot cross-reference information about other company devices and accounts in the same Azure tenant. In legacy AD this is possible via the so-called "Global Catalog" server: user and device data can be queried through it via PowerShell scripts, and by default a Global Catalog server is configured in the AD environment on the first Domain Controller. In Azure Active Directory and Intune/MEM, this is blocked by default (since there is no legacy domain to query), and only accounts with a specific RBAC role can request this information.
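The contrast can be sketched roughly: instead of an open Global Catalog LDAP query, directory reads in Azure AD go through Microsoft Graph and require an authorized token with an appropriate permission. The endpoint path below is the real Graph v1.0 users endpoint; the token and the exact permission needed are placeholders / assumptions in this sketch:

```python
# Sketch: building an RBAC-gated directory read via Microsoft Graph, as
# opposed to a legacy Global Catalog lookup. The endpoint is the Graph v1.0
# users endpoint; "<access-token>" is a placeholder (a real call needs a
# token carrying a directory-read permission such as User.Read.All).
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def users_request(select_fields: list) -> tuple:
    """Return the (url, headers) pair for a Graph users query."""
    url = f"{GRAPH_BASE}/users?$select=" + ",".join(select_fields)
    headers = {"Authorization": "Bearer <access-token>"}  # placeholder token
    return url, headers

url, headers = users_request(["displayName", "userPrincipalName"])
print(url)
```

The point is not the two lines of string building, but that the token behind the `Authorization` header is issued only to identities that have been explicitly granted the directory-read role, which is the behavior the legacy Global Catalog never enforced internally.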
Simplifying user accounts also plays a significant role in the modernization. At some point in the life of corporate user accounts, there are simply too many accounts per user and far too many different systems. With Azure AD / O365, all these accounts can be combined under one account, with SSO / SAML synchronizing the single login across all services connected to Azure. Nearly every business application makes it possible to integrate its login with Azure, making it significantly easier and more centralized to manage and maintain the security and lifecycle of the connected accounts. One example of this is Adobe.
The next logical step is SSO combined with Microsoft Defender for Cloud Apps (previously Cloud App Security). This is Microsoft's offering in the so-called CASB category (Cloud Access Security Broker). This solution monitors and evaluates to which online services (Facebook, Twitter, AWS) company user accounts can move company data. Reviewing all of these services and then excluding the unnecessary ones from data transfer is the proper way to utilize this application. It's also directly part of the generally required modern Azure security licensing package. Implementing this in a legacy or hybrid AD is significantly expensive and difficult.
Via these measures, user accounts are always protected in addition to company data. In the end, this whole modernization project only aims to make the company's data as secure and as easily accessible as possible for the company's employees. This makes working more efficient, and the workflows and the whole IT foundation are certainly on a future-proof and modern footing. This also makes it easy to upgrade and develop the entire company IT operation, and all new additional services can be integrated into this foundation without significant labor and effort from employees.
All of this is available with very flexible licensing options for companies. Even the most common license (M365 BP) makes almost all of the services mentioned above available. And the M365 E5 license replaces virtually all legacy IT infrastructure through a single user license with the modernization of the foundations. Through the Compliance side of the license, it also provides significant additional data security and data control that would not be available in a legacy hybrid environment. At the same time, this brings clear benefits to IT budgeting, as each employee has a clear cost estimate for being productive for the company on the IT side. Typically, evaluating this cost is very hard when the costs are distributed across all the different IT infrastructures. The modernized foundation has a clear price tag for each employee:
Required M365 license + devices + any other individual licensed applications + any other services / month.
So, for example: €20 + €50 + Adobe CC ≈ €90 – €100 / month
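The per-employee formula above is trivially automatable for budgeting. The sketch below uses the illustrative figures from the text (the Adobe CC price is an assumption to make the example concrete, not a list price):

```python
# Sketch of the per-employee monthly cost formula from the text. The prices
# are the illustrative figures above; the Adobe CC figure (~25 €) is an
# assumption for the example, not a current list price.
def monthly_cost_per_employee(license_eur: float, device_eur: float,
                              extra_apps_eur: float) -> float:
    """Total monthly IT cost for one employee."""
    return license_eur + device_eur + extra_apps_eur

# e.g. M365 license €20 + device €50 + Adobe CC ~€25
print(monthly_cost_per_employee(20, 50, 25))  # 95
```

Because every term is a monthly subscription, the same formula also gives the exact saving when an employee leaves: the whole line item simply drops to zero.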
Since these are all monthly prices, the billing also ends just as easily when an employee leaves the company. No recurring costs from over-budgeting are incurred.
For the best and most current information on each M365 license and what each service does, check the absolutely awesome Aaron Dinnage website:
As an additional note, all of this IT foundational level modernization boils down to a few points:
Identity, integration, access and data management
Managing a single Azure AD identity that accesses all the company data it should have access to, via company-allowed, up-to-date devices, is a far simpler, more elegant and modern solution than any other we've yet come across. For employees, business and operations alike.
Also, since all of this runs in the public Microsoft Azure cloud, connecting and utilizing any other modern or legacy service offering on this foundation is VERY simple, which makes the most critical aspect a walk in the park: integrations. This way, adding new business solutions is both quick and efficient compared to integrating them into a legacy or hybrid AD environment.
So, how do we know all this? From experience, and from more than a dozen similar complete IT foundation level modernization projects completed in companies of different sizes. All of these were implemented during the COVID-19 year (2020) for our customers. We will also no longer implement any other IT foundations, as this is the only future-proof solution for a comprehensive, modern IT environment.
If you have any questions about this, we will be happy to answer them!
Since this is the only post on this page in English (as of 8th Nov. 2021), I’d like to highlight that we do offer all of our services in English as well.
Final footnote: If someone crazy got this far into this regurgitation, THANK YOU! I hope this sparks some interest in checking over your own existing IT foundations, if nothing else.