Archive for the ‘Technology’ category

Dynamics 365 HR and Talent recent updates - December 2019

December 7th, 2019

Microsoft will continue investing in operational HR solutions: the ERP-based version, Dynamics 365 Human Resources, will be released early next year, on February 3, 2020.

This builds on the core HR capabilities that are in Dynamics 365 Talent today. It is essentially a branding and marketing change for those core HR capabilities.
Microsoft will also incorporate the ‘Ax’ partner add-ons from Dynamics partners Four Vision and Elevate to further enhance the offering within leave and absence, time and attendance, and benefits administration. These new capabilities will begin rolling out within Dynamics 365 Human Resources in early 2020.

Updated licensing is also expected.

Microsoft recently announced, via a blog post, the decision to retire the Dynamics 365 Talent: Attract and Dynamics 365 Talent: Onboard apps on February 1, 2022. It will transition Attract and Onboard customers to a solution of their choice. This does not affect those who only use the core Talent module.

To allow time to opt in, Dynamics 365 customers that are entitled to, but are not currently using, Attract or Onboard will have until February 3, 2020 to notify Microsoft that they intend to implement Attract and/or Onboard. You can opt in at any point between December 6, 2019 and February 1, 2020. If you are not currently using Attract and/or Onboard and want to opt in to ensure service availability until February 1, 2022, submit a support ticket before 1 Feb 2020.

Meanwhile, Synergy Software Systems continues to implement and support its own GCC-localised HR and Payroll module, built inside both Dynamics Ax 2012 and Dynamics 365, and proven with around 50 company implementations.

Power BI update - Gateway recovery key, move to .NET Framework 4.8 - ask Synergy Software Systems, Dubai’s Power Apps specialist.

December 5th, 2019

The November update for the on-premises data gateway (version 3000.14.39) has been released.

Change Gateway Recovery Key
The recovery key provided by gateway admins during installation of an on-premises data gateway in standard mode could not previously be changed. This key is used to create the symmetric key which, in turn, is used to encrypt credentials for data sources and connections using that gateway. With the November release of the data gateway, you can now rotate this key. More information about recovery keys, a detailed description of how to perform this change, and the associated limitations can be found in the data gateway docs.
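To make the role of the recovery key concrete, here is a minimal conceptual sketch in Python: it derives a symmetric key from a recovery key and uses that key to encrypt a credential. This is not the gateway's actual implementation; the key-derivation parameters, salt handling, and credential values are all illustrative assumptions.

```python
# Conceptual illustration only: derive a symmetric key from a recovery key
# and use it to encrypt a credential. This is NOT the gateway's actual
# implementation; parameters and values below are illustrative.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

recovery_key = "my-gateway-recovery-key"  # hypothetical value
salt = os.urandom(16)                     # would be stored alongside the ciphertext

# Derive a 256-bit symmetric key from the recovery key.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
symmetric_key = base64.urlsafe_b64encode(kdf.derive(recovery_key.encode("utf-8")))

# Encrypt a data-source credential with the derived key.
cipher = Fernet(symmetric_key)
encrypted_credential = cipher.encrypt(b"sql-service-account-password")

# Rotating the recovery key conceptually means deriving a new symmetric key
# and re-protecting stored credentials under it.
print(cipher.decrypt(encrypted_credential))
```

Conceptually, rotating the recovery key amounts to re-protecting the stored credentials under a newly derived key, which is why the gateway docs describe limitations around data sources tied to that gateway.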

November version of the mashup engine
This month’s Gateway update also includes an updated version of the Mashup Engine. This will ensure that the reports that you publish to the Power BI Service and refresh via the Gateway will go through the same query execution logic/runtime as in the latest Power BI Desktop version.

Please note that this upcoming change may impact you:
Power BI gateways from the February 2020 version onwards will use the .NET Framework 4.8, hence some operating systems that were previously supported may no longer be supported; i.e. for many, this change will also force a Windows update.

All .NET Framework versions since .NET Framework 4 are in-place updates, so only a single 4.x version can be present on a system.
In addition, particular versions of the .NET Framework are pre-installed on some versions of the Windows operating system. This means that:
• If there’s a later 4.x version installed on the machine already, then you can’t install a previous 4.x version.
• If the OS comes pre-installed with a particular .NET Framework version, then you can’t install a previous 4.x version on the same machine.
• If you install a later version, you don’t have to first uninstall the previous version.
• The .NET Framework requires administrator privileges for installation.
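If you are unsure which .NET Framework 4.x release is installed on a gateway machine, it can be read from the registry. Below is a minimal sketch in Python, run on the gateway server; per Microsoft's published mapping, a Release value of 528040 or higher indicates .NET Framework 4.8.

```python
# Read the installed .NET Framework 4.x release from the Windows registry.
# Release >= 528040 corresponds to .NET Framework 4.8 (exact value varies slightly by OS build).
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    release, _ = winreg.QueryValueEx(key, "Release")

if release >= 528040:
    print(f"Release {release}: .NET Framework 4.8 or later is installed.")
else:
    print(f"Release {release}: older than .NET Framework 4.8 - plan a framework update.")
```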

Microsoft to disable Office 365’s Delve Blogs from December 2019

December 4th, 2019

Delve is an Office 365 capability, associated with the Office Graph, that is designed to surface relevant information for end users, who access Delve from the Office 365 App Launcher. The Delve Blogs feature lets Office 365 users create personal blogs.

Microsoft described its impending end in a Nov. 22 Microsoft Premier Support response letter that was published by a customer. Instead of offering a fix, Microsoft described Delve Blogs’ coming shut-off. A portion of Microsoft’s response letter, which states that the blog feature will be deleted on April 17, 2020, with disablement happening earlier, follows:

Delve Blogs to be Retired
Delve blogs are being retired. Delve blogs will no longer be available for creation, and existing blogs will eventually be removed.
Delve Blog retirement schedule:
• Beginning December 18th, 2019, tenants will not have the ability to create new Delve blogs.
• Beginning January 18th, 2020 the ability to create new posts in existing Delve blogs will be disabled.
• Beginning April 17th, 2020, existing Delve blogs will be deleted and removed from Delve profiles.

Plan to use alternative methods of blogging. We recommend creating Communication sites using News, Yammer, and Stream as a modern way of engaging with your audience. To learn more about how to setup a great blogging site, please review Creating a blog with communications sites and news posts.

Microsoft’s Delve Blogs retirement message also arrived earlier this week for administrators via the Office 365 Message Center. Microsoft’s deadlines and communication approach are “quite aggressive” for organizations, but Delve Blogs is likely getting axed because too few organizations use the feature.

So far, there hasn’t been any apparent public communication from Microsoft that Delve itself will be going away. Communication Sites in SharePoint are a possible substitute for Delve Blogs, but not for all organizations, because “not all users can create sites for themselves.” To prepare for the coming deletion of Delve Blogs, end users should start saving their blogs as document files.

Hyper-automation with UiPath and Synergy Software Systems

November 26th, 2019

Hyperautomation? Gartner just named hyperautomation as its #1 strategic technology trend for 2020 (from its recently published “Top 10 Strategic Technology Trends for 2020” report). Hyperautomation was the most discussed topic at Gartner Symposium events over the past 60 days in Orlando, Barcelona, and Goa.

Hyperautomation is the combination of several process automation tools and technologies that increase the ability to automate more work. It starts with robotic process automation (RPA) at its core, and adds artificial intelligence (AI), process mining, analytics, and other advanced tools to expand automation capability. With hyperautomation, an organization can automate more and more knowledge work, and engage everyone to be part of the transformation.

We are excited about hyperautomation because it aligns with how we see the future of Robotic Process Automation (RPA).

Hyperautomation builds on the real success that underlies the phenomenal growth of RPA: ease of automation, speed to outcomes, and a now proven path to applying artificial intelligence (AI) to improve business operations. RPA has delivered digital transformation faster than any other technology by starting with the business first and foremost.

RPA is now moving beyond task-based automation to include support for more complex processes and long-running workflows. Software robots can now interact with business users across core business processes, directly impacting customer experience and satisfaction. In other words, we are heading towards a future where anything that can be automated most likely will be.
Hyperautomation is key to scaling sophisticated automation across the enterprise with speed and efficiency

Automation first
An automation first mindset encourages users to imagine a future of work that prioritizes human engagement, creativity, and productivity. C-suites engaging an automation first mindset consider how an integrated workforce of humans and robots can best solve any new problem.

Windows Server 2008 and 2008 R2 support will end January 14, 2020 - ask Synergy Software Systems about options.

November 16th, 2019

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. That is only two months away.
That means the end of regular security updates.

Don’t let your infrastructure and applications go unprotected.

We’re here to help you migrate to current versions for greater security, performance and innovation.
009714 3365589

Azure Arc in preview - manage hybrid data across cloud platforms

November 16th, 2019

Now in preview, Azure Arc helps simplify distributed enterprise environments by managing everything via Azure services (like Azure Resource Manager). Connecting hybrid infrastructure via Azure Arc improves security for users via automated patching, and provides improved governance, with everything ‘under one roof’. Azure Arc is a tool that lets organizations manage their data on the Microsoft Azure cloud, Amazon Web Services (AWS), Google Cloud Platform, or any combination of these.

Microsoft says that deployments can be set up “in seconds” via Azure data services anywhere, a feature of Azure Arc.

Azure Arc also supports Kubernetes clusters and edge infrastructures, as well as on-premises Windows and Linux servers.
There is no final release date yet, but there is a free preview of Azure Arc.


Microcode BIOS Updates coming from a Microsoft Update

November 13th, 2019

Intel and AMD microcode updates are coming from Microsoft Update or the Windows Update Catalog.
The security implications of why you should update the microcode on your processors are covered in these links:

https://support.microsoft.com/en-us/help/4093836/summary-of-intel-microcode-updates

https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00233.html

https://www.amd.com/en/corporate/product-security

Microsoft is collaborating with Intel and AMD on these microcode updates.

When processors are manufactured, they have a baseline microcode baked into their ROM. This microcode is immutable and cannot be changed after the processor is built. Modern processors have the ability, at initialization, to apply volatile updates that move the processor to a newer microcode level. However, as soon as the processor is rebooted, it reverts to the microcode baked into its ROM. These volatile updates can be applied to the processor in one of two ways: via the System Firmware/BIOS from the OEM, or by the Operating System (OS). Neither updates the microcode in the processor’s ROM. If you were to remove the processor from one computer and install it in a computer with an older System Firmware/BIOS and an un-updated OS, then you would again be vulnerable.

Windows offers the broadest coverage and quickest turnaround time to address these vulnerabilities. Microcode updates delivered via the Windows OS are not new; as far back as 2007 some updates were made available to address performance and reliability concerns.

You could just take the OEM System Firmware/BIOS updates, but often Microsoft Update has the microcode updates to address issues much sooner.

When the processor boots, versioning ensures that it uses the latest microcode update regardless of where it came from. Installing both System Firmware/BIOS updates and microcode updates from Microsoft Update is therefore fine. It is possible that the OEM firmware updates the microcode to one level and the OS then updates it to an even higher level during the same boot.

Microcode updates install like any other update. They can be installed from Microsoft Update, WSUS, SCCM or manually installed if downloaded from the Catalog. The key difference is that the payload of the hotfix is primarily one of two files:

mcupdate_GenuineIntel.dll – Intel
mcupdate_AuthenticAMD.dll – AMD

These files contain the updated microcode, and Windows automatically loads them via the OS Loader to patch the microcode on the bootstrap processor. The payload is then passed to additional processors as they start up, as well as to the Hyper-V hypervisor if it is enabled.
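To confirm which microcode revision a machine is actually running after patching, the values Windows records for the boot processor can be read from the registry. The sketch below, in Python, assumes the commonly documented "Update Revision" and "Previous Update Revision" value names under the CentralProcessor key; verify these against Microsoft's guidance for your OS build.

```python
# Read the microcode revision values Windows records for CPU 0.
# Value names ("Update Revision", "Previous Update Revision") are assumed
# from published guidance; verify them for your OS build.
import winreg

KEY_PATH = r"HARDWARE\DESCRIPTION\System\CentralProcessor\0"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for value_name in ("Update Revision", "Previous Update Revision"):
        try:
            data, _ = winreg.QueryValueEx(key, value_name)
            # Values are stored as binary; show them as hex for comparison
            # against the revision listed in the microcode KB article.
            print(f"{value_name}: {bytes(data).hex()}")
        except FileNotFoundError:
            print(f"{value_name}: not present on this machine")
```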

Azure Synapse – BI for petabytes of data

November 11th, 2019

Microsoft introduced Azure Synapse Analytics last week as the “next evolution” of Azure SQL Data Warehouse.

It promises better performance and more capabilities than Azure SQL Data Warehouse, and existing customers will “automatically benefit” from the enhancements that are now in preview.

Azure Synapse is a “limitless” analytics service that accommodates data warehouse, data lake, machine learning, and BI needs, with either a serverless or a provisioned-resources approach.

Among the benefits of Synapse:
• The service can query both relational and non-relational data using SQL (see the sketch after this list)
• It can “apply intelligence” over all data, including Dynamics 365, Office 365 and SaaS services that support the Open Data Initiative
• It offers a unified experience for data prep, data management, data warehousing, big data, and AI
• Privacy and security features include: automated threat detection, always-on data encryption, column level security, and dynamic data masking
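As a rough illustration of querying non-relational data with SQL, the sketch below runs a serverless SQL query over parquet files in a data lake from Python. The workspace endpoint, storage path, credentials, and the use of pyodbc with ODBC Driver 17 are all assumptions made for the example; adapt them to your own environment.

```python
# Query parquet files in a data lake through a Synapse serverless SQL endpoint.
# Endpoint, database, path, and credentials below are placeholders for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myworkspace-ondemand.sql.azuresynapse.net;"  # hypothetical endpoint
    "DATABASE=master;UID=sqladminuser;PWD=<password>"    # replace with real credentials
)

query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/sales/orders/*.parquet',
    FORMAT = 'PARQUET'
) AS orders;
"""

for row in conn.cursor().execute(query):
    print(row)
```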

In an Ignite demo, Rohan Kumar, Microsoft corporate vice president for Azure Data, claimed a petabyte-scale query across multiple data sources ran in Synapse in 9 seconds, versus over eleven minutes in Google’s BigQuery. He also claimed that both AWS Redshift and BigQuery degrade more as concurrent load increases.

Microsoft Power Platform enhancements - ask Synergy Software Systems

November 11th, 2019

• Microsoft Flow is renamed to Microsoft Power Automate, to better align with the Microsoft Power Platform.
• Robotic process automation (RPA) is added to Microsoft Power Automate, to deliver end-to-end automation solutions that span AI, APIs, and UI on the Microsoft Power Platform.
• Microsoft Power Virtual Agents - a no-code/low-code app that allows anyone to create, and to deploy, intelligent, AI-powered virtual agents.
• Many security enhancements for Microsoft Power BI, no matter where analytics data is used and accessed.
• Microsoft Power Platform and Microsoft Teams: as organizations encourage a data-driven culture, it’s important they break down silos and ensure that the right people in the organization have the data they need to be involved in the decision-making process. Teams and the Power Platform bring together the best of workplace collaboration and data-driven business in one place.

Power Platform applications dashboards, apps, and automations are available within Teams, so they are easier to find, share, and use on an everyday basis. The conversational nature of Teams enhances how users interact with Power Platform applications. Adaptive cards and bots let users engage with these tools directly through conversation. This integration gives IT Administrators high fidelity control and prioritization of features.

Power Apps creators can publish their apps directly to their company’s app library in Teams. By the end of 2019, users will be able to pin Power Apps to their Teams left rail, to provide easy access to regularly used apps.

New triggers and actions for Power Automate are available within Teams to streamline the completion of common team and personal tasks, such as scheduling focus time, and automating document approvals.

New features coming to Power BI next year include the ability to create rich adaptive cards in Teams conversations, to help users see and act on their data. An improved Power BI tab experience in Teams will make it easy to select the correct reports.

The American Red Cross is leveraging Power Platform integration with Teams to improve disaster response times.

• New, prebuilt models for AI Builder to add more advanced AI models to Microsoft Power Automate and Microsoft Power Apps.

These new features and products provide the Power Platform with an unmatched set of capabilities that enable everyone to analyze, act, and automate across their organization, and so transform businesses from the ground up.

Enhanced HA and DR benefits for SQL Server Software Assurance from 1 November.

November 5th, 2019

The enhanced benefits to SQL licensing for high availability and disaster recovery that are listed below are now applicable to all releases of SQL Server for a customer with SQL Server licenses with Software Assurance. The updated benefits will be available in the next refresh of the Microsoft Licensing Terms.

Business continuity is a key requirement for planning, designing, and implementing any business-critical system. When you bring data into the mix, business continuity becomes mandatory. It’s an insurance policy that one hopes never to have to claim against. SQL Server brings intelligent performance, availability, and security to Windows, Linux, and containers, and can tackle any data workload, from BI to AI, and from online transaction processing (OLTP) to data warehousing. You get mission-critical high availability and disaster recovery features that allow you to implement various topologies to meet your business SLAs.

A customer with SQL Server licenses with Software Assurance has historically benefited from a free passive instance of SQL Server for their high availability configurations. That helps to lower the total cost of ownership (TCO) of an application using SQL Server. Today, those existing Software Assurance benefits for SQL Server are enhanced, which further helps customers implement a holistic business continuity plan with SQL Server.

Starting Nov 1st, every Software Assurance customer of SQL Server will be able to use three enhanced benefits for any SQL Server release that is still supported by Microsoft:
• Failover servers for high availability – Allows customers to install and run passive SQL Server instances in a separate operating system environment (OSE) or server for high availability on-premises in anticipation of a failover event. Today, Software Assurance customers have one free passive instance for either high availability or DR
• Failover servers for disaster recovery NEW – Allows customers to install and run passive SQL Server instances in a separate OSE or server on-premises for disaster recovery in anticipation of a failover event
• Failover servers for disaster recovery in Azure NEW – Allows customers to install and run passive SQL Server instances in a separate OSE or server for disaster recovery in Azure in anticipation of a failover event

With these new benefits, Software Assurance customers can implement hybrid disaster recovery plans with SQL Server using features like Always On Availability Groups without incurring additional licensing costs for the passive replicas.

A setup can use SQL Server running on an Azure Virtual Machine that utilizes 12 cores as a disaster recovery replica for an on-premises SQL Server deployment using 12 cores. In the past, you would need to license 12 cores of SQL Server for both the on-premises and the Azure Virtual Machine deployments. The new benefit extends passive-replica rights to an Azure Virtual Machine. Now a customer needs to license only the 12 cores of SQL Server running on-premises, as long as the disaster recovery criteria for the passive replica on the Azure Virtual Machine are met.

If the primary (active) replica uses 12 cores hosting two virtual machines, and the topology has two secondary replicas (one synchronous replica for high availability supporting automatic failover, and one asynchronous replica for disaster recovery without automatic failover), then the number of SQL Server core licenses required to operate this topology will be only 12 cores, as opposed to 24 cores in the past.
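As a rough back-of-the-envelope illustration of that arithmetic (not an official licensing calculator), the Python sketch below charges licenses only for active replicas, on the assumption that every secondary in the topology qualifies as a passive failover replica under the Software Assurance benefits above.

```python
# Illustrative only - not an official licensing calculator.
# With the enhanced benefits, passive HA/DR replicas are not licensed;
# only active replicas are.

def required_core_licenses(replicas):
    """replicas: list of (cores, is_passive) tuples, one per SQL Server instance."""
    return sum(cores for cores, is_passive in replicas if not is_passive)

topology = [
    (12, False),  # primary (active) replica, 12 cores
    (12, True),   # synchronous HA secondary, passive
    (12, True),   # asynchronous DR secondary, passive
]

print(required_core_licenses(topology))  # 12 cores with the new benefits
# Previously only one passive instance was free, so the second secondary
# also had to be licensed: 12 (primary) + 12 (extra secondary) = 24 cores.
```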

These high availability and disaster recovery benefits will be applicable to all releases of SQL Server. In addition to the high availability and disaster recovery benefits, the following operations are allowed on the passive replicas:
• Database consistency checks
• Log backups
• Full backups
• Monitoring resource usage data

SQL Server 2019 also provides a number of improvements for availability, performance, and security along with new capabilities like the integration of HDFS and Apache Spark™ with the SQL Server database engine.

HA for UiPath Orchestrator

November 2nd, 2019

High availability and geo-redundancy are popular topics as our customers scale their Robotic Process Automation (RPA) infrastructure, and the business impact of robot downtime becomes greater.
“High availability” means having two or more instances of a service running and synchronized, so that if one fails, the service remains available.
“Geo-redundancy” is a specific approach that places those instances in different countries, to protect against potentially more significant threats.

For on-premises, third-party, and hybrid configurations, the new solution from UiPath is the High Availability add-on for UiPath Orchestrator. It’s built on, and replaces, REDIS, which many customers have used previously - but it comes from UiPath and delivers on key capabilities customers have been asking for:
• A fully supported, enterprise-grade UiPath redundancy solution
• Active-Active failover support with local latency
• Easy upgradability from REDIS (open source or Enterprise), our previous recommendation
• Scalability from two UiPath Orchestrator instances up to the largest global installations

Orchestrator lets you optimize your robot workforce. Among other things, Orchestrator prioritizes (and in some cases initiates) robots’ work, shares information between them, and lets you manage, monitor, and audit robots from one place. With Orchestrator, software robots do more for your business, and as your business scales robot use, Orchestrator itself becomes more and more critical to your RPA environment.

One Orchestrator can handle a lot of robots in a typical configuration, but any product running on a single server is vulnerable to failure if something happens to that server. This is where the High Availability add-on is like an insurance policy that keeps your robot fleet running. The High Availability add-on enables you to add a second Orchestrator server to your environment that is always fully synchronized with the first server. If anything happens to one of the servers (from disaster to planned maintenance), then the workload is picked up seamlessly by the other. Your robots carry on supporting your business as though nothing happened.

The High Availability add-on can be used further to enable geo-redundancy: two or more Orchestrator servers can be installed in multiple countries, providing local latencies under normal conditions. However, if one of those servers fails, the others immediately pick up the workload. If you are running Orchestrator in a third-party cloud, then you may be less concerned about hardware failure, but you may still want to regionalize your Orchestrators and protect against needing to take one offline, especially when your compliance requirements mandate the ability to provide a full audit 24/7.

The High Availability add-on is always required when you want to run more than one connected and synchronized Orchestrator instance, whether you are running these on your own physical hardware, virtualized hardware in the cloud, or a hybrid scenario.

SnapLogic iPaaS (integration platform as a service) - from Synergy Software Systems.

October 20th, 2019

Business Intelligence Managers/Analysts, Data/ETL Engineers, and Information/Data Architects are tasked with empowering business users to make use of data to drive smart decisions and innovations. Data-driven initiatives can be challenging considering the explosion of data volumes due to the proliferation of sensors, IoT, and mobile computing.

Moreover, a growing number of groups within the business want access to fresh data.

To fully harness their data, organizations must also have a cloud strategy for their digital transformation efforts, namely to migrate data from on-premises environments to the cloud. Considering the tremendous business value of unlocking that data, it’s imperative to prioritize and streamline these data integration and migration projects.

Gone are the days when IT needed hundreds of coders to build extract, transform, load (ETL) solutions and then maintain those by writing more code. Modern integration platforms eliminate the need for custom coding. Now, data integration projects deploy and scale, often as much as ten times faster.

iPaaS platforms ease the pain because they’re designed for flexibility and ease of deployment for any integration project. They offer a drag-and-drop UX coupled with a powerful platform and hundreds of pre-built connectors out of the box.

The connectors are always up-to-date, so the IT organization doesn’t spend an inordinate amount of time maintaining every integration by hand. This saves an incredible amount of time, money, and frustration across the team and projects and greatly reduces risk.

Not all integration platforms are created equal. Some do simple point-to-point cloud app integrations, while others transform large and complex data into a data lake for advanced analytics. Some still require extensive developer resources to hand-code APIs, while others provide self-service, drag-and-drop offerings that can be used by IT and business leaders alike. Some are best for specific tactical projects, while others provide a strategic, enterprise-wide platform for multi-year digital transformation projects.

Organizations must address four key steps during the data migration and integration process:
1. Capture data that supports both the known use cases and future, undefined use cases (think IoT data to support a future machine-learning-enabled use case).
2. Conform inbound data to corporate standards to ensure governance, quality, consistency, regulatory compliance, and accuracy for downstream consumers.
3. Refine data for its eventual downstream application and/or use cases (once it has been captured and conformed to corporate standards).
4. Deliver data broadly, prepared to support future, as-yet-unknown destinations.

For decades, IT has been tasked to manage integration projects by writing tons of custom code. This onerous task is even more complex with the proliferation of SaaS applications, the surge in big data, the emergence of IoT, and the rise of mobile devices. IT’s integration backlog has exploded. Not only is the deployment too much work, but there is a growing cost to maintain all of the integrations.

Deploying a tactical or departmental data warehouse solution should take days, not months. Moreover, enterprise-wide data transformation projects should take months, not years.

The best data integration platforms:
- Support multiple app and data integration use cases across cloud, on-premises, and hybrid deployments
- Offer the flexibility to be used in cloud, hybrid, or on-premises environments, regardless of the execution location
- Provide a self-service user experience aided by AI, machine learning, hundreds of pre-built connectors, and integration pipeline templates (patterns), resulting in greater user productivity and faster time-to-integration
- Have an underlying, scalable architecture to grow with evolving data and integration requirements
- Support different data modes such as streaming, event-driven, real-time, or batch

“The SnapLogic iPaaS offering is functionally rich and well-proven for a variety of use cases. It supports hybrid deployments and provides rich and differentiating features for analytics and big data integration (Hadooplex). Clients score SnapLogic as above average for cloud characteristics, functional completeness, ease of use and ability to meet SLAs.” - Gartner

SnapLogic is a U.S.-based integration platform company. In mid-2013, it transitioned from a traditional software business to an iPaaS model with the release of the SnapLogic Elastic Integration Platform which provides a large set of native iPaaS capabilities that target the cloud service integration, analytics and big data integration use cases.

The flagship Enterprise Edition features a set of base adapters (Snaps), an unlimited number of connections and unlimited data volume.

Synergy Software Systems has been an Enterprise Solutions Integrator in the GCC since 1991. We are pleased to announce our formal partnership to represent Snap Logic in the MEA region.

Do you need to integrate with Azure? With SAP Data Warehouse Cloud? With Workday? With Odette-compliant auto manufacturers?

To learn more call us on 009714 3365589

Microsoft’s Newest Flight Simulator- Wow!

October 14th, 2019

To get a feel for how much technology has advanced in recent years, see this post:

http://inspire.eaa.org/2019/09/30/an-inside-look-at-microsofts-newest-flight-simulator/

“The new simulator models the entire planet, including something like 40,000 airports worldwide.” … “The world in the new version consists of 2 petabytes of data — yes, that’s one thousand times bigger. The scenery is built on Bing satellite and aerial imagery, augmented with cool buzzwordy stuff like photogrammetric 3D modeling and multiple other data sources, all of which is streamed via Microsoft’s Azure cloud service.”

“… of the scenery areas boast a resolution of 3 centimeters per pixel” … “Throw in 1.5 trillion trees, individual blades of grass modeled in 3D, and a complete overhaul of lighting and shadows, and the result is an unprecedented level of detail for a flight simulator of any kind. True VFR flight, day or night, with real-world landmarks is now possible everywhere on the planet.”

Windows - no more security updates for older versions of Windows 10

October 12th, 2019

On Wednesday last week Microsoft warned that organizations running Windows 10 version 1703 will stop getting “quality updates” (security and nonsecurity patches) on Oct. 8, 2019.

Organizations running Windows 10 version 1803 will stop getting quality updates on Nov. 12, 2019.
So our advice is to upgrade to the latest version to continue to get patch support from Microsoft.
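If you are unsure which Windows 10 version a machine is running, the release identifier can be read from the registry; a minimal sketch in Python is shown below. The "ReleaseId" value name is assumed from common documentation for these builds, so verify it on your own systems.

```python
# Report the Windows 10 release (e.g. "1703", "1803", "1903") so you can
# tell whether the machine still receives quality updates.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    release_id, _ = winreg.QueryValueEx(key, "ReleaseId")

out_of_support = {"1703", "1803"}  # per the dates quoted above
if release_id in out_of_support:
    print(f"Windows 10 version {release_id}: plan an upgrade to keep receiving patches.")
else:
    print(f"Windows 10 version {release_id}.")
```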

Patching systems is more complicated with Windows 10. Microsoft admitted a couple of years ago that it had stopped testing its quality updates when it released Windows 8, and that it had reduced its app testing. It now follows more of a combination of a DevOps ‘crowd-QA’ feedback approach and its “telemetry” data collection to hold off delivering potentially problematic upgrades. It hasn’t been a problem-free approach, and there have been failures in delivering problem-free patches. Enterprise organizations also have more complex software environments to patch than Microsoft covers in its own testing.

If Microsoft continues, as indicated, down the road of making Windows a service billed on consumption, then it may have to look closely at the relationship between QA and its SLAs.

5G – performance and security

October 12th, 2019

5G is now available in many countries. Gartner research indicates that 66 per cent of organisations plan to deploy 5G by 2020, while 59 per cent intend to include IoT communications with 5G. However, many enterprises are underprepared to cope with the levels of data to be captured, shared, analysed, and stored in near-real time for its benefits to be realised.

The promise of higher bandwidth delivered at lower latency offers huge potential. Its predecessor, 4G, has a current maximum download speed of 100 Mbps; 5G is expected to deliver 10 Gbps, download speeds 100 times faster than 4G. Any kind of lag in transmitting information for real-time decision-making can be problematic. ERP systems notoriously suffer when they scale up to multiple geographies: because of latency, a connection or transaction might time out before being committed. To minimise the risk, thin-client connections are often adopted.

When data travels thousands of miles to a cloud data centre via multiple ISPs, latency is a very real concern. Edge computing addresses this issue with micro data centres just outside the network, i.e. at the “edge”, to place compute and analytics power as close to the action as possible and enable real-time decision-making. Data that doesn’t have to be processed immediately is routed to a cloud data centre for later use.
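As a rough sketch of that split (purely illustrative, with invented thresholds, field names, and actions), an edge node might act locally on readings that need an immediate decision and forward the rest to the cloud in batches:

```python
# Illustrative edge-vs-cloud split: act locally on urgent readings,
# and batch everything else for later upload to a cloud data centre.
# Thresholds, field names, and actions are invented for the example.
from typing import Dict, List

URGENT_TEMP_C = 90.0          # hypothetical "act now" threshold
cloud_batch: List[Dict] = []  # readings deferred to the cloud

def handle_reading(reading: Dict) -> None:
    if reading["temperature_c"] >= URGENT_TEMP_C:
        # Real-time decision made at the edge, with no round trip to the cloud.
        print(f"EDGE ACTION: shut down {reading['sensor_id']}")
    else:
        # Routed to the cloud data centre for later, context-rich analysis.
        cloud_batch.append(reading)

def flush_to_cloud() -> None:
    # In practice this would POST to an ingestion endpoint; here it just reports.
    print(f"Uploading {len(cloud_batch)} readings to the cloud for contextual analytics")
    cloud_batch.clear()

handle_reading({"sensor_id": "pump-7", "temperature_c": 95.2})
handle_reading({"sensor_id": "pump-8", "temperature_c": 42.1})
flush_to_cloud()
```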

Consider self-driving cars as IoT endpoints and black boxes at crossroads as edge computing points, with communication secured via 5G. Context is everything. Without the ability to instantly analyse and act upon data in context, 5G will offer less value. An isolated measurement, e.g. that a plane is cruising at 10,000 feet, isn’t very useful unless you know it was at 36,000 feet less than one minute beforehand. This contextual analysis is vital to get the best from IoT. Data still needs to be processed centrally. Edge computing can provide more rapid, but more superficial, insights; it does not give context-rich data insights. So, while there’s a clear need to be prepared to collect data from distributed sources, there’s also a need to manage and act upon it centrally.

5G is here to stay, and the number and types of devices that will connect to the network will continue to increase.
So how do you optimise and secure its use for different business requirements? First, identify the requirements, the options, and the threats.

Custom network configuration, known as network slicing, is the most significant difference between 4G and 5G. In previous generations of wireless, networks were configured universally, and each customer and each network operator had few options for customising or optimising their network configuration.

Vertical applications each require different levels of performance, and network slicing allows operators to offer customised resources for the different types of applications running on their network. A report issued earlier this week by the European Commission and the European Agency for Cybersecurity argues that, for 5G to work, telecommunications companies will increasingly rely on software for things like network virtualisation and slicing.

A healthcare provider requires high-reliability, low-latency resources in its networked applications, with redundancy for applications like remote surgery or heart monitoring that run on the network.

For an IoT application, the needs are different. The bulk of IoT data flowing over the network will be small amounts of data from many sensors. While reliability is still a factor, more important may be the management of a very high number of connections. Mobility management might not be particularly important for a large IoT network, because the connected devices might not move. Without the need to provision mobility for a network of millions of devices, the operator can save on capital expenditure.
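Purely to illustrate how differently those two slices might be specified (values and field names are invented for this example), the contrasting requirements could be captured like this:

```python
# Invented, illustrative slice profiles contrasting the two cases above.
slice_profiles = {
    "healthcare": {
        "latency_ms_max": 10,          # remote surgery / monitoring need low latency
        "reliability_pct": 99.999,     # high reliability and redundancy
        "mobility_management": True,
        "expected_devices": 10_000,
    },
    "massive_iot": {
        "latency_ms_max": 500,         # small, infrequent sensor payloads
        "reliability_pct": 99.0,
        "mobility_management": False,  # static sensors: operators can save capex here
        "expected_devices": 5_000_000, # very high connection counts matter most
    },
}

for name, profile in slice_profiles.items():
    print(name, profile)
```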

According to the European report, vendor lock-in and potential cybersecurity issues are weak spots for telecommunications companies all over the world looking to integrate 5G. Given the USA-China trade war controversy this year, it’s not surprising that many in the media argue that the report’s concerns are targeted at Huawei, even though the company’s name was never mentioned.

If, due to a lack of skilled staff, the telcos look to outsource software support to suppliers, then will they put their entire operation at risk? “The increased role of software and services provided by third party suppliers in 5G networks leads to a greater exposure to a number of vulnerabilities that may derive from the risk profile of individual suppliers,” the report states. “Major security flaws, such as those deriving from poor software development processes within equipment suppliers, could make it easier for actors to maliciously insert intentional backdoors into products and make them also harder to detect. This may increase the possibility of their exploitation leading to a particularly severe and widespread negative impact.”

The report argues that countries should not only assess the technical qualities of their potential suppliers, but also analyse the “non-technical vulnerabilities related to 5G networks”, which include having connections with, and/or doing business with, a government. A lack of legislation or of “democratic checks and balances”, as well as the lack of security or data protection agreements between the supplier’s country and the EU, are other concerns.

“… hostile third countries may exercise pressure on 5G suppliers in order to facilitate cyberattacks serving their national interests,” the report states. “The degree of exposure to this risk is strongly influenced by the extent to which the supplier has access to the network, in particular its most sensitive assets, and by the risk profile of the individual supplier.”

If your digital journey includes scalability, IoT, big data and predictive analytics, RPA, and cross-platform process automation, then both 5G and Wi-Fi 6 will be enablers, and their secure management and the overall solution architecture need to be considered holistically.