Archive for the ‘SharePoint and EPM’ category

Inside a Microsoft cloud data centre with Synergy Software Systems

November 22nd, 2017

Get the reach and local presence you need with Microsoft’s global datacenters – https://azure.microsoft.com/en-us/regions/ Azure is generally available in 36 regions around the world, with plans announced for 6 additional regions.

Go beyond the limits of your on-premises datacenter using the scalable, reliable infrastructure that powers the Microsoft Cloud.

Transform your business and reduce maintenance costs with an energy-efficient infrastructure spanning more than 100 highly secure facilities worldwide, linked by one of the largest networks on earth.

The engine that powers Microsoft’s cloud services, the infrastructure is designed to support smart growth, high reliability, operational excellence, cost-effectiveness, environmental sustainability, and a trustworthy online experience for customers and partners worldwide.

Microsoft delivers the core infrastructure and foundational technologies for its numerous online businesses, including: Dynamics 365, Power BI, Cortana Analytics, IoT, Bing, MSN, Office 365, Xbox Live, Skype, OneDrive, and the Windows Azure platform.

The infrastructure comprises a large global portfolio of more than 100 datacenters and 1 million servers, content distribution networks, edge computing nodes, and fiber optic networks.

The portfolio is built and managed by a team of subject matter experts working 24x7x365 to support services for more than 1 billion customers and 20 million businesses in over 90 countries worldwide.

Those are 2014 figures, and the Microsoft cloud has expanded greatly since then, for example with the acquisition of LinkedIn and the launch of Dynamics 365.

To help you comply with national, regional, and industry-specific requirements governing the collection and use of individuals’ data, Microsoft offers the most comprehensive set of compliance offerings of any cloud service provider. Microsoft business cloud services operate with a cloud control framework, which aligns controls with multiple regulatory standards (https://www.microsoft.com/en-us/trustcenter/guidance/risk-assessment#Audit-reports)

Argentina PDPA – Microsoft has implemented the security measures in the Argentina Personal Data Protection Act.

BIR 2012 – Agencies operating in the Netherlands government sector must comply with the Baseline Informatiebeveiliging Rijksdienst standard.

Canadian Privacy Laws – Microsoft contractually commits to implementing security that helps protect individuals’ privacy.

CCSL (IRAP) – Microsoft is accredited for the Australian Certified Cloud Services List based on an IRAP assessment.

CDSA – Azure is certified to the Content Delivery and Security Assoc. Content Protection and Security standard.

China DJCP – Azure and Office 365 operated by 21Vianet are rated at Level 3 for information security protection.

China GB 18030 – Azure and Office 365 operated by 21Vianet are certified as compliant with the Chinese character standard.

China TRUCS – Azure and Office 365 operated by 21Vianet obtained Trusted Cloud Service certification.

CJIS – Microsoft government cloud services adhere to the US Criminal Justice Information Services Security Policy.

CS Mark (Gold) – Microsoft received the CS Gold Mark in Japan for Azure (IaaS and PaaS) and Office 365 (SaaS).

CSA STAR Attestation – Azure and Intune were awarded Cloud Security Alliance STAR Attestation based on an independent audit.

CSA STAR Certification – Azure, Intune, and Power BI were awarded Cloud Security Alliance STAR Certification at the Gold level.

CSA STAR Self-Assessment – Microsoft STAR Self-Assessment details how cloud services fulfill Cloud Security Alliance requirements.

DFARS – Microsoft Azure Government supports Defense Federal Acquisition Regulation (DFARS) requirements.

DoD – Microsoft received Department of Defense (DoD) Provisional Authorizations at Impact Levels 5, 4, and 2.

EN 301 549 – Microsoft meets EU accessibility requirements for public procurement of ICT products and services.

ENISA IAF – Azure aligns with the ENISA framework requirements through the CSA CCM version 3.0.1.

EU Model Clauses – Microsoft offers EU Standard Contractual Clauses, guarantees for transfers of personal data.

EU-U.S. Privacy Shield – Microsoft complies with this framework for protecting personal data transferred from the EU to the US.

FACT – Microsoft Azure achieved certification from the Federation Against Copyright Theft in the UK.

FDA CFR Title 21 Part 11 – Microsoft helps customers comply with these US Food and Drug Administration regulations.

FedRAMP – Microsoft was granted US Federal Risk and Authorization Management Program P-ATOs and ATOs.

FERPA – Microsoft aligns with the requirements of the US Family Educational Rights and Privacy Act.

FIPS 140-2 – Microsoft certifies that its cryptographic modules comply with the US Federal Info Processing Standard.

FISC – Microsoft meets the requirements of the Financial Industry Information Systems v8 standard in Japan.

GxP – Microsoft cloud services adhere to Good Clinical, Laboratory, and Manufacturing Practices (GxP).

HIPAA/HITECH – Microsoft offers Health Insurance Portability & Accountability Act Business Associate Agreements (BAAs).

HITRUST – Azure is certified to the Health Information Trust Alliance Common Security Framework.

IRS 1075 – Microsoft has controls that meet the requirements of US Internal Revenue Service Publication 1075.

ISO 9001 – Microsoft is certified for its implementation of these quality management standards.

ISO 20000-1:2011 – Microsoft is certified for its implementation of these service management standards.

ISO 22301 – Microsoft is certified for its implementation of these business continuity management standards.

ISO 27001 – Microsoft is certified for its implementation of these information security management standards.

ISO 27017 – Microsoft cloud services have implemented this Code of Practice for Information Security Controls.

ISO 27018 – Microsoft was the first cloud provider to adhere to this code of practice for cloud privacy.

IT Grundschutz Compliance Workbook – Azure Germany published this Workbook to help our clients achieve IT Grundschutz certification.

ITAR – Azure Government supports customers building US International Traffic in Arms Regs-capable systems.

MARS-E – Microsoft complies with the US Minimum Acceptable Risk Standards for Exchanges (MARS-E).

MeitY – The Ministry of Electronics and Info Technology in India awarded Microsoft a Provisional Accreditation.

MPAA – Azure successfully completed a formal assessment by the Motion Picture Association of America.

MTCS – Microsoft received certification for the Multi-Tier Cloud Security Standard for Singapore.

My Number (Japan) – Microsoft does not have standing access to My Number data, a number unique to each resident of Japan.

NEN 7510:2011 – Organizations in the Netherlands must demonstrate control over patient health data in accordance with the NEN 7510 standard.

NHS IG Toolkit – Azure meets the requirements of the UK National Health Service Information Governance (IG) Toolkit.

NIST 800-171 – Microsoft DoD certifications address and exceed US NIST 800-171 security requirements.

NIST CSF – Microsoft Cloud Services meet the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF)

NZ CC Framework – Microsoft NZ addresses the questions published in the New Zealand cloud computing framework.

PCI DSS – Azure complies with Payment Card Industry Data Security Standards Level 1 version 3.1.

Section 508 – Microsoft cloud services offer Voluntary Product Accessibility Templates.

Shared Assessments – Microsoft demonstrates alignment of Azure with this program through the CSA CCM version 3.0.1.

SOC 1 – Microsoft cloud services comply with Service Organization Controls standards for operational security.

SOC 2 – Microsoft cloud services comply with Service Organization Controls standards for operational security.

SOC 3 – Microsoft cloud services comply with Service Organization Controls standards for operational security.

Spain ENS – Microsoft received Spain’s Esquema Nacional de Seguridad (National Security Framework) certification.

UK Cyber Essentials PLUS – Cyber Essentials PLUS is a UK government-defined scheme to help organizations protect against common cyber-security threats.

UK G-Cloud – The Crown Commercial Service renewed the Microsoft cloud services classification to Government Cloud v6.

WCAG 2.0 – Microsoft cloud services comply with the Web Content Accessibility Guidelines 2.0.

SQL Server 2012 Service Pack 4 (SP4) is available

October 16th, 2017

Microsoft has released SQL Server 2012 Service Pack 4 (SP4). This service pack has 20+ improvements centered on performance, scalability, and diagnostics, to enable SQL Server 2012 to perform faster and scale out of the box on modern hardware designs.

SQL Server 2012 SP4 includes all the fixes up to and including SQL Server 2012 SP3 CU10.

Security, security, security

September 26th, 2017

You never know when some item that queries or alters data in SQL Server will cause issues.

Bruce Schneier recently commented on FaceID and Bluetooth security, the latter of which has a vulnerability issue. I was amazed to see his piece on infrared camera hacking. A POC on using light to jump air gaps is truly frightening. It seems that anywhere we process data, we need to be thinking about security (see https://arstechnica.com/information-technology/2017/09/attackers-can-use-surveillance-cameras-to-grab-data-from-air-gapped-networks/).

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are “air gapped,” meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attack surface than WiFi, one almost entirely unexplored by the research community, and hence one that contains far more vulnerabilities.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected.

The Equifax breach, for example, must worry everyone who has ever had credit in the USA. (Hackers broke into Equifax’s computer systems in March, two months earlier than the company had previously disclosed, according to a Wall Street Journal report.)

The Securities and Exchange Commission said Wednesday that a cyber breach of a filing system it uses may have provided the basis for some illegal trading in 2016. In a statement posted on the SEC’s website, Chairman Jay Clayton said a review of the agency’s cybersecurity risk profile determined that the previously detected “incident” was caused by “a software vulnerability” in its EDGAR filing system, which processes over 1.7 million electronic filings in any given year. The agency also discovered instances in which its personnel used private, unsecured email accounts to transmit confidential information.

So let me suggest you take a good look at your systems and be honest: do you feel safe?

Microsoft has released Microsoft 365, a complete, intelligent solution, including Office 365, Windows 10, and Enterprise Mobility + Security, that empowers everyone to be creative and work together, securely. Watch Satya introduce it.

What about your websites?
Although acts of vandalism such as defacing corporate websites are still commonplace, hackers prefer to gain access to the sensitive data residing on the database server and then sell that data.

The costs of not giving due attention to your web security are extensive; apart from the direct financial burden and inconvenience, they also include:
• Loss of customer confidence, trust, and reputation, with consequent harm to brand equity.
• Negative impact on revenues and profits, arising e.g. from falsified transactions or employee downtime.
• Website downtime, which is in effect the closure of one of the most important sales and marketing channels, especially for an e-business.
• Legal battles and related implications of web application attacks and poor security measures, including fines and damages to be paid to victims.

Web Security Weaknesses
Hackers will attempt to gain access to your database server any way they can, e.g. through out-of-date protocols on a router. The two main targets are:
• Web and database servers.
• Web applications.

Information about such exploits is readily available on the Internet, and many have been reported on this blog previously.

Web Security Scanning
So it is no surprise that web security comprises two important components: web and database server security, and web application security.

Addressing web application security is as critical as addressing server security.

Firewalls and similar intrusion detection mechanisms provide little defense against full-scale web attacks. Since your website needs to be public, security mechanisms must allow public web traffic to communicate with your web and database servers (i.e. over port 80).

It is of paramount importance to scan these web assets on the network for possible vulnerabilities. For example, modern database systems (e.g. Microsoft SQL Server, Oracle, and MySQL) may be accessed through specific ports, so anyone can attempt direct connections to the databases to try to bypass the security mechanisms used by the operating system. These ports remain open to allow communication with legitimate traffic and therefore constitute a major vulnerability.
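As a rough illustration (not a pen-testing tool), a minimal Python sketch can probe whether such a port answers on a host you are authorized to test; the port numbers are the well-known defaults for SQL Server, Oracle, and MySQL, and the helper name is our own.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Well-known default database ports: SQL Server 1433, Oracle 1521, MySQL 3306.
DB_PORTS = {"sql_server": 1433, "oracle": 1521, "mysql": 3306}
```

If any of these report open from outside your perimeter, direct connection attempts against the database become possible, which is exactly the exposure described above.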

Other weaknesses relate to the database application itself and the use of weak or default passwords by administrators. Vendors patch their products regularly, and attackers just as regularly find new ways of attack.

Some 75% of cyber attacks target weaknesses within web applications rather than the servers directly. Hackers launch web application attacks on port 80. Web applications are more open to undiscovered vulnerabilities because they are generally custom-built and therefore pass through a lesser degree of testing than off-the-shelf software.

Some hackers, for example, maliciously inject code into vulnerable web applications to trick users and redirect them towards phishing sites. This technique is called Cross-Site Scripting (XSS) and may be used even when the web and database servers themselves contain no vulnerability.
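As a hedged sketch of the standard XSS mitigation, escaping user-supplied text before echoing it into a page renders injected markup inert; the comment-rendering function below is illustrative, not from any particular framework.

```python
import html

def render_comment(user_input: str) -> str:
    """Return an HTML fragment with the user's text escaped, not executable."""
    return "<p>" + html.escape(user_input) + "</p>"

# An injected script comes back as harmless text rather than running:
safe = render_comment('<script>alert(1)</script>')
# safe == '<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>'
```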

Hence, any web security audit must answer the questions: “Which elements of our network infrastructure are open to hack attacks?”, “Which parts of a website are open to hack attacks?”, and “What data can we throw at an application to cause it to do something it shouldn’t?”

Ask us about Acunetix and Web Security
Acunetix ensures website security by automatically checking for SQL injection, Cross-Site Scripting, and other vulnerabilities. It checks password strength on authentication pages and automatically audits shopping carts, forms, dynamic content, and other web applications. As the scan completes, the software produces detailed reports that pinpoint where vulnerabilities exist.
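To make the SQL injection class of flaw concrete, here is a minimal sketch using an in-memory SQLite database; production systems such as SQL Server, Oracle, or MySQL take the same parameterized form through their own drivers, and the table and data here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name: str):
    # VULNERABLE: input is concatenated into the SQL text, so the classic
    # payload ' OR '1'='1 makes the WHERE clause match every row.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # SAFE: a placeholder keeps the input as data, never as SQL syntax.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
leaked = find_user_unsafe(payload)  # the injection leaks every user
empty = find_user_safe(payload)     # the parameterized query returns no rows
```

A scanner probes for exactly this difference: whether crafted input changes the shape of the query the server executes.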

SQL Server 2014 SP2 CU7

September 3rd, 2017

The 7th cumulative update release for SQL Server 2014 SP2 is available for download at the Microsoft Downloads site.
Registration is no longer required to download Cumulative updates.

• CU#7 KB Article: https://support.microsoft.com/en-us/help/4032541/cumulative-update-7-for-sql-server-2014-sp2
• Microsoft® SQL Server® 2014 SP2 Latest Cumulative Update: https://www.microsoft.com/en-us/download/details.aspx?id=53592
• Update Center for Microsoft SQL Server: http://technet.microsoft.com/en-US/sqlserver/ff803383.aspx

GDPR Affects All European Businesses – What about the G.C.C. and U.A.E.?

August 19th, 2017

See our previous article on this topic for why your company may be affected if you are a branch of a European company, or have branches in Europe, or trade with a European company.

From May 25, 2018, companies with business operations inside the European Union must follow the General Data Protection Regulations (GDPR) to safeguard how they process personal data “wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system.”

The penalties set for breaches of GDPR are up to 4% of a company’s annual global turnover.
For large companies like Microsoft that have operations within the EU, making sure that IT systems do not contravene GDPR is critical. As we saw on August 3, even the largest software operations like Office 365 can have a data breach.

Many applications can store data that might come under the scope of GDPR. The regulation has a considerable influence over how tenants deal with personal data. The definition of personal data is “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”
GDPR goes on to define processing of personal data to be “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.”

That means that individuals have the right to ask companies to tell them what of their personal data a company holds, and to correct errors in their personal data, or to erase that data completely.

Companies therefore need to:
• review and know what personal data they hold,
• make sure that they obtain consent from people to store that data,
• protect the data,
• and notify authorities when data breaches occur.

On first reading, this might sound like what companies do – or at least try to do – today. The difference lies in the strength of the regulation and the weight of the penalties should anything go wrong.

GDPR deserves your attention.

The definitions used by GDPR are broad. To move from the theoretical to the real world, an organization first needs to understand what personal data it currently holds for its business operations, and where it uses that data within software applications.

It is easy to hold personal information outside of business applications like finance, ERP, and CRM, e.g. inside Office 365 applications, including:
• Annual reviews written about employees, stored in a SharePoint or OneDrive for Business site.
• A list of applicants for a position in an Excel worksheet attached to an email message.
• Tables holding data (names, employee numbers, hire dates, salaries) about employees in SharePoint sites.
• Outlook contacts and emails, and Skype for Business conversations.
• Social media sites.
• Loyalty programmes.
• Time and attendance (T&A) systems.
• E-commerce sites.
• Mobile apps, e.g. WhatsApp.

Other examples might include contract documentation, project files that includes someone’s personal information, and so on.

What backups do you have of the customer’s data?
What business data do your staff hold on BYOD devices, e.g. in WhatsApp?

Data Governance Helps
Fortunately, the work done inside Office 365 in the areas of data governance and compliance helps tenants to satisfy the requirements of GDPR. These features include:
• Classification labels and policies to mark content that holds personal data.
• Auto-label policies to find and classify personal data as defined by GDPR. Retention processing can then remove items stamped with the GDPR label from mailboxes and sites after a defined period, perhaps after going through a manual disposition process.
• Content searches to find personal data marked as coming under the scope of GDPR.
• Alert policies to detect actions that might be violations of the GDPR such as someone downloading multiple documents over a brief period from a SharePoint site that holds confidential documentation.
• Searches of the Office 365 audit log to discover and report potential GDPR issues.
• Azure Information Protection labels to encrypt documents and spreadsheets holding personal data by applying RMS templates so that unauthorized parties cannot read the documents even if they leak outside the organization.

The following technology exists today within Office 365 and can help with GDPR.

Classification Labels
Create a classification label to mark personal data coming under the scope of GDPR and then apply that label to relevant content. When you have Office 365 E5 licenses, create an auto-label policy to stamp the label on content in Exchange, SharePoint, and OneDrive for Business that is found to hold sensitive data types known to Office 365.

GDPR sensitive data types

Select from the set of sensitive data types available in Office 365.
The set is growing steadily as Microsoft adds new definitions.
At the time of writing, 82 types are available, 31 of which are obvious candidates to use in a policy because they cover sensitive data types such as country-specific identity cards or passports.

Figure 1: Selecting personal data types for an auto-label policy (image credit: Tony Redmond)

GDPR Policy

The screenshot in Figure 2 shows a set of sensitive data types selected for the policy. The policy applies a label called “GDPR personal data” to any content found in the selected locations that matches any of the 31 data types.

Auto-apply policies can cover all Exchange mailboxes and SharePoint and OneDrive for Business sites in a tenant – or a selected sub-set of these locations.


Figure 2: The full set of personal data types for a GDPR policy (image credit: Tony Redmond)

Use classification labels to mark GDPR content so that you can search for this content using the ComplianceTag keyword (for instance, ComplianceTag:”GDPR personal data”).

Caveats:
It may take one to two weeks before auto-label policies apply to all locations.
An auto-label policy will not overwrite a label that already exists on an item.

A problem is that classification labels only cover some of Office 365. Some examples of popular applications where you cannot yet use labels are:
• Teams.
• Planner.
• Yammer.

Microsoft plans to expand the Office 365 data governance framework to other locations (applications) over time.
Master data management
What about all the applications running on SQL Server or other databases?
Master Data Management (MDM) has been a feature of SQL Server since SQL Server 2012. However, when you have many data sources you are really into an ETL process, and even with MDM tools the work is still significant.

If you have extensive requirements, then ask us about Profisee, our specialist, productized MDM solution built on top of SQL Server MDM that allows you to do much of the work by configuration.

Right of Erasure
Finding GDPR data is only part of the problem. Article 17 of GDPR (the “right of erasure”), says: “The data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay.” In other words, someone has the right to demand that an organization should erase any of their personal data that exists within the company’s records.

Content searches can find information about someone using their name, employee number, or other identifiers as search keywords, but erasing the information is something that probably also needs manual processing to ensure that the tenant removes the right data, and only that data.
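As a sketch of such a first pass, a short script can shortlist exported documents that mention the data subject before any manual review and erasure decision; the subject name and the employee-number format below are placeholders we invented, not a real schema.

```python
import re

# Placeholder identifiers for the data subject; a real request would supply
# the actual name and whatever identifier formats your systems use.
SUBJECT_NAME = re.compile(r"\bJane Doe\b", re.IGNORECASE)  # hypothetical name
EMPLOYEE_NO = re.compile(r"\bEMP-\d{6}\b")                 # assumed format

def flag_for_review(doc_id: str, text: str) -> list:
    """Return (doc_id, identifier_type) pairs for each identifier found."""
    hits = []
    for label, pattern in (("name", SUBJECT_NAME), ("employee_no", EMPLOYEE_NO)):
        if pattern.search(text):
            hits.append((doc_id, label))
    return hits
```

The point is that automation only shortlists candidates; a person still has to confirm that each flagged item really concerns the data subject before anything is erased.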

You can find and remove documents and other items that hold someone’s name or other identifiers by using tools such as Exchange’s Search-Mailbox cmdlet or Office 365 content searches.
What if the data has to be retained because the company needs to keep items for regulatory or legal purposes – can you then go ahead and remove the items?
The purpose of placing content on hold is to ensure that no one, including administrators, can remove that information from Exchange or SharePoint.

The GDPR requirement to erase data on request means that administrators might have to release holds placed on Exchange, SharePoint, and OneDrive for Business locations to remove the specified data. Once you release a hold, you weaken the argument that held data is immutable. The danger exists that background processes or users can then either remove or edit previously-held data and so undermine a company’s data governance strategy.

The strict reading of GDPR is that organizations must process requests to erase personal data upon request.
What if the company needs to keep some of the data to satisfy regulations governing financial transactions, taxation, employment claims, or other interactions? This is a dilemma for IT. Lawyers will undoubtedly have to interpret requests and understand the consequences before making decisions and it is likely that judges will have to decide some test cases in different jurisdictions before full clarity exists.

Hybrid is even More Difficult

Microsoft is working to help Office 365 tenants with GDPR. However, I don’t see the same effort going to help on-premises customers. Some documentation exists to deal with certain circumstances (like how to remove messages held in Recoverable Items), but it seems that on-premises customers have to figure out a lot of things for themselves.

This is understandable. Each on-premises deployment differs slightly and exists inside a specific IT environment. Compared to the uniformity of Office 365, software developed for on-premises deployment must accommodate vertical and company-specific requirements, with integrations and bespoke developments.

On-premises software is more flexible, but it is also more complicated.
Solutions to help on-premises customers deal with GDPR are more of a challenge than Microsoft or other software vendors want to take on, especially given the industry focus on moving everything to the cloud.

Solutions like auto-label policies are unavailable for on-premises servers. Those running on-premises SharePoint and Exchange systems must find their own ways to help the businesses that they serve deal with personal data in a manner that respects GDPR. Easier said than done, and it needs to start sooner rather than later.

SharePoint Online GitHub Hub

If you work with SharePoint Online, you might be interested in the SharePoint GDPR Activity Hub. At present, work is only starting, but it is a way to share information and code with like-minded people.

ISV Initiatives

There are many ISV-sponsored white papers on GDPR and how their technology can help companies cope with the new regulations. There is no doubt that these white papers are valuable, if only for the introduction and commentary by experts that the papers usually feature. But before you resort to an expensive investment, ask yourself whether the functionality available in Office 365 or SQL Server is enough.

Technology Only Part of the Solution

GDPR will affect Office 365 because it will make any organization operating in the European Union aware of new responsibilities to protect personal data. Deploy Office 365 features to support users in their work, but do not expect Office 365 to be a silver bullet for GDPR. Technology seldom solves problems on its own. The nature of regulations like GDPR is that training and preparation are as important, if not more important, than technology to ensure that users recognize and properly deal with personal data in their day-to-day activities.

Docs.microsoft.com – the new Microsoft documentation portal

August 6th, 2017

Microsoft Corporation has provisioned a new portal for all of its application and framework documentation, and is also adding the ability for readers to contribute to that content.

The new docs.microsoft.com rolled out recently, with documentation for each of the following topic groups, arranged by product line/focus:
• SQL
• Windows
• Microsoft Azure
• Visual Studio
• Office
• .NET
• ASP.NET
• Dynamics 365
• Enterprise Mobility + Security
• NuGet
• Xamarin

So there is no longer any need to navigate to msdn.microsoft.com. The central portal leads to additional functionality and information sharing not previously offered.

3 new Microsoft tools to help you to move to the cloud.

April 18th, 2017

Here’s a breakdown of the three new Microsoft tools to help you move to the cloud faster and what they can offer businesses.

1. Free cloud migration assessment

This assessment helps customers more easily find and better understand their current server setups, to help them determine the cost and value of moving to the cloud. Once the servers are discovered, the tool can analyze their configurations and give the user a report of the potential cost savings of moving to Azure.

Data center administrators can export the results of the assessment into a customized report. The report could provide some valuable data and statistics for a CIO conversation with the CFO.

2. Azure Hybrid Use Benefit

This tool should save users money on their cloud deployments. Customers can activate the Azure Hybrid Use Benefit in the Azure Management Portal; it is available on Windows Server virtual machines in Azure, to all customers. “Use your on-premises Windows Server licenses that include Software Assurance to save big on Windows Server VMs in Azure. By using your existing licenses, you pay the base compute rate and save up to 40 percent,” the tool’s web page says.

3. Azure Site Recovery

Azure Site Recovery is meant to ease the process of migrating virtual machines to Azure. Applications running on AWS, VMware, Hyper-V, or physical servers can be moved. Additionally, a new feature in Azure Site Recovery will “allow you to tag virtual machines within the Azure portal itself. This capability will make it easier than ever to migrate your Windows Server virtual machines.”

Other features include automated protection and replication of virtual machines, remote monitoring, custom recovery plans, recovery plan testing, and more.

Azure infographic

April 3rd, 2017

What can you do on the Azure cloud?
Ask Synergy Software Systems: 0097143365589

Dynamics 365 launch event Microsoft Gulf

February 5th, 2017

Last week Synergy staff attended the Dynamics 365 regional launch day. This gave insights into the Microsoft Dynamics solution portfolio. Siegfried Leiner, Principal Program Manager, Dynamics CRM, Microsoft, gave a ‘deep dive’ keynote speech.

The event was kicked off by Samer Abu Ltaif, Regional General Manager, Microsoft Gulf, and Karim Talhouk, Regional Director, Microsoft Business Solutions, Microsoft Gulf, who presented how digital transformation is happening in the Gulf. Steve Plimsoll, Chief Digital and Data Officer, PwC, and Harris Mygdalis, Deputy CIO, Commercial Bank of Dubai, gave further insights.

This well attended event attracted customers with a wide range of requirements. Mobility, analytics, and integration were common themes.

Project failure? Why wait to find out – try a pre-mortem.

January 23rd, 2017

A recent survey by PwC showed that over 85% of Dynamics AX projects failed to achieve their core objectives.

Implementing Dynamics AX has been compared to performing open heart surgery on an organisation, where the stakes couldn’t be higher, and so the ‘physicians’ entrusted to give the ‘patient’ its new lease of life need to be masters of their craft and highly experienced.

The same is true of most ERP systems, and indeed of most projects. There is always over-optimism, over-confidence, and an assumption of perfection, despite copious evidence that other good companies have made the same errors of judgement.

So before you start a project, rather than wait to see how it pans out and then do a post-mortem, try holding a pre-mortem. Assume the project did not go well, and ask your project team and stakeholders to write down why.

In the pre-project phase, once a senior executive gives the green light to go ahead, dissenting voices tend to go quiet. Costs are negotiated down rather than risk management, contingency, and quality being built in.

A pre-mortem can help you find out what people really think and inject a touch of realism about the challenges, and the realistic scope, time, and resources needed. Once concerns are identified they can be addressed, and the team that has to deliver will be much more committed, with its concerns out in the open and a less rose-tinted outlook.

Microsoft organisational changes from 1 February 2017.

January 11th, 2017

Microsoft is combining its Small and Mid-Market Solutions & Partners (SMS&P) and Enterprise Partner Group (EPG) business units in an attempt to streamline business processes. The changes, which will take effect from February 1, will affect its sales, partner, and services teams, and will see both units come together as one under its Worldwide Commercial Business, led by executive vice-president, Judson Althoff. Corporate vice-president of mid-market solutions and partners, Chris Weber, will lead the combined business.

This seems to echo former CEO Steve Ballmer’s 2013 One Microsoft plan. No layoffs are expected.

In Australia, Mark Leigh runs the SMS&P business after David Gage resigned from the role. As for its local EPG business, the role of unit head is yet to be filled, as Steven Worrall was given the managing director title after Pip Marlow left the company. However, how these changes will affect Microsoft Australia and New Zealand is yet to be determined.

This move follows the recent departure of then Microsoft chief operating officer, Kevin Turner, whose role was not replaced but was split amongst five senior executives including Althoff. As part of that restructure, Althoff was handed the Worldwide Commercial Business, focusing on the Enterprise and Partner Group, Public Sector, Small and Midmarket Solutions and Partners, the Developer Experience team, and services.

The company restructure also sees the creation of a new One Commercial Partner business, which combines various partner teams within Microsoft and will be led by former Salesforce vice president and current Microsoft Corporate Vice President of Enterprise Partner Ecosystem, Ron Huddleston; a unit called Microsoft Digital, which is expected to grow Microsoft’s cloud division by pushing current customers and partners to use the company’s cloud programs, and will be led by Anand Eswaran, corporate vice president of Microsoft Services; and the merger of its Worldwide Public Sector and Industry businesses.

Corporate Vice President of Worldwide Public Sector Toni Townes-Whitley will lead a combined group comprising Microsoft’s Worldwide Public Sector and Industry businesses.

Jeff Teper, who was Microsoft’s corporate vice president of corporate strategy, announced on Twitter last week he now leads the company’s OneDrive and SharePoint teams. It’s a familiar role, as Teper led the group that first built SharePoint for its 2001 launch. The move seems to be the latest to make room for Kurt DelBene, who was brought back to the executive team after retiring in 2013 to help the U.S. government fix the healthcare.gov website. DelBene assumed a new title as executive vice president of corporate strategy and planning in April. (Soon after, Eric Rudder, executive vice president of advanced strategy, and Mark Penn, executive vice president of advertising and strategy, announced they would be leaving Microsoft.)

David Treadwell, a longtime Microsoft executive who oversaw the Windows engineering team, is also on the move. He’s taking an unidentified role in the Cloud and Enterprise group. Treadwell told staff he was reluctant to leave the Windows team, but “when the CEO calls, well, you take that call.”

According to Microsoft’s announcement, Kim Akers and the ISV team, Victor Morales and the Enterprise Partner team, and Gavriella Schuster and the WPG team will all be moving into One Commercial Partner.

Azure – what is it exactly?

January 8th, 2017

You may have recently seen a television commercial for “The Microsoft Cloud,” which featured Healthcare, Cancer Research, and Cybercrime. So, what does this have to do with Microsoft Azure?

Microsoft Azure is the Microsoft product name for the Microsoft Cloud. The names are used synonymously in the technical industry.

Amid the digital transformational shift to the cloud, the question remains: “What is Azure, and for whom is it meant?”

Azure was announced in October 2008 and released in February 2010 as Windows Azure, and was then renamed Microsoft Azure in March 2014.

Azure is a cloud computing platform plus the underlying infrastructure and management services created by Microsoft to build, deploy, and manage applications and services through a global network of Microsoft-managed data centers.

What are Microsoft Azure Data Centers?

There are 34 interconnected Microsoft Data Regions around the world with more planned.

Microsoft describes Azure as a “growing collection of integrated cloud services, including analytics, computing, database, mobile, networking, storage, and web.” Azure’s integrated tools, pre-built templates and managed services simplify the task of building and managing enterprise applications (apps).

Microsoft Corp. CEO Satya Nadella calls Azure, “the industry’s most complete cloud — for every business, every industry and every geography.”

The Complete Cloud

For many businesses, their first foray into leveraging cloud software as a service (SaaS) is with Microsoft Office 365, Exchange online for hosted email, or CRM online for managing business and customer relationships. However, the Azure platform is much more than just an online business software delivery platform.

Here are just a few of the things that you can do with Azure:
• Build and deploy modern, cross platform web and mobile applications.
• Store, backup and recover your data in the cloud with Azure-based disaster recovery as a service (DRaaS).
• Run your Line of Business applications on Azure.
• Run large scale compute jobs and perform powerful predictive analytics.
• Encode, store and stream audio and video at scale.
• Build intelligent products and services leveraging Internet of Things services.

Use Azure, and your partner, to rapidly build, deploy, and host solutions across a worldwide network, and to create hybrid solutions that seamlessly integrate existing on-premises IT with Azure.

Many leverage Azure to protect data and meet privacy standards like the new international cloud privacy standard, ISO 27018, or HIPAA.

Azure customers can quickly scale up infrastructure and, just as importantly, scale it down, while only paying for what they use.

Azure also supports a broad selection of operating systems, programming languages, frameworks, tools, databases and devices.

Contrary to the perception that Azure is for Windows only, nearly one in three Azure virtual machines run Linux.3

Widespread Adoption

More than 80 percent of Fortune 500 companies rely on Azure, which offers enterprise grade SLAs on services. In addition, Microsoft is the only vendor positioned as a Leader across Gartner’s Magic Quadrants for Cloud Infrastructure as a Service (IaaS), Application Platform as a Service (PaaS), and Cloud Storage Services for the second consecutive year.1

What is Microsoft Azure IoT?

Microsoft’s powerful Azure Internet of Things Hub and tool suite has also been widely adopted for use in commercial and scientific applications to securely connect and manage Internet of Things (IoT) assets. The service processes more than two trillion IoT messages weekly.4

From broadcasting the Olympics to building massively multiplayer online games, Azure customers are doing some amazing things, and in increasing numbers. Microsoft recently revealed that the rate of Azure customer growth has accelerated to more than 120k new Azure customer subscriptions per month.4 In line with the accelerated adoption, the company is projecting an annualized commercial cloud revenue run rate of $20 Billion in 2018.3

Cloud Leadership

With Azure, Microsoft has made a huge commitment to cloud computing. Since opening its first datacenter, Microsoft has invested more than $15 billion in building its global cloud infrastructure.5 In addition, the company recently announced it would build its first Azure data center in France this year as part of a $3 billion investment to build its cloud services in Europe.6

Microsoft is quickly closing the gap in market share with IaaS provider Amazon Web Services, (AWS). While 37.1% of IT professionals surveyed indicated that Amazon AWS is their primary IaaS platform, Microsoft Azure is a close second at 28.4%, followed by Google Cloud Platform at 16.5%.7

And hot off the press…
Microsoft isn’t building its own connected car — but it is launching a new Azure-based cloud platform for car manufacturers to use the cloud to power their own connected-car services.

The new Microsoft Connected Vehicle Platform will go live as a public preview later this year.
“This is not an in-car operating system or a ‘finished product’,” Microsoft’s EVP for business development Peggy Johnson writes in this week’s announcement. “It’s a living, agile platform that starts with the cloud as the foundation and aims to address five core scenarios that our partners have told us are key priorities: predictive maintenance, improved in-car productivity, advanced navigation, customer insights and help building autonomous driving capabilities.”

Microsoft also announced that it is partnering with the Renault-Nissan Alliance to bring the new connected-car services to Renault-Nissan’s next-gen connected vehicles. The two companies were already working together on other projects before this, so it’s maybe no surprise that Renault-Nissan is Microsoft’s first partner.
Microsoft is also working with BMW to develop that company’s BMW Connected platform on top of Azure. BMW and Nissan also showed in-car integrations with Microsoft’s Cortana digital assistant at CES this year, so your future car could potentially use Cortana to power its voice-enabled services. For the time being, though, it looks like these are still experiments.

Microsoft has talked about its aim to bring “intelligence” to as many of its services as possible. It has also recently opened up Cortana to third-party developers, so bringing it to its connected car platform is a logical next step (and we’re also seeing Amazon doing the same thing with Alexa).

Johnson also used today’s announcement to take a thinly veiled swipe at Google/Alphabet, which spun out its self-driving car unit a few weeks ago. “As you may have gathered, Microsoft is not building its own connected car,” she writes. “Instead, we want to help automakers create connected car solutions that fit seamlessly with their brands, address their customers’ unique needs, competitively differentiate their products and generate new and sustainable revenue streams.”

PowerShell – why?

December 19th, 2016

What is the point of PowerShell?

It handles any task that requires scripting and gives power back to the user, developer, or administrator. PowerShell is a tool at a very high level of abstraction, and thus can quickly get a task done by creating a chain of software tools, without resorting to writing a compiled application.

PowerShell is an: extensible, open-source, cross-platform object-oriented scripting language that uses .NET.

It can use: COM, WMI, WS-Management and CIM to communicate with, and interact with, any Windows-based process.
It can execute scripts on either a local workstation or remotely.
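As an illustrative sketch of local versus remote execution (assuming PowerShell remoting is enabled on the target with Enable-PSRemoting, and ‘SERVER01’ is a hypothetical machine name):

```powershell
# Run a command locally
Get-Service -Name 'W32Time'

# Run the same command remotely over WinRM (PowerShell remoting).
# 'SERVER01' is a hypothetical server; you need admin rights on it,
# and remoting must be enabled there.
Invoke-Command -ComputerName SERVER01 -ScriptBlock {
    Get-Service -Name 'W32Time'
}
```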

It is ideal for automating all sorts of processes, and is simple enough to manage your workstation, and yet robust enough to manage SQL Azure.

It will evolve to become the built-in batch processing system in future versions of Windows.

It is important as a configuration management tool and task automation tool, and is versatile enough to be used as a general-purpose programming language.

How did PowerShell come about?

Unix inherited from its mainframe ancestors the use of batch processing and scripts.
The use of the script allowed UNIX to develop a group of specialized applications that did one job and did it well.
Data could be passed into an application through its standard input, and the results passed to the standard output which meant that data could be streamed like items on a conveyor belt.

It was like building repeatable processes out of Lego like blocks of code.

Scripting did more than encourage piped streams of data. It also encouraged batches and command-line configuration of machines and services. This made it easy to use Unix for servers, because all administration tasks could be scripted.

Scaling up to large groups of servers was smooth since everything was in place to allow it to happen.

Scripting also made the management of servers more precise and error-free. After the script was developed, no further work was needed. The script would do the same thing in the same order with the same result. It made operations work a lot easier.

Think the Korn shell, with ideas from the Bash shell.

Why didn’t Windows have a powerful scripting language like Korn?

Unlike the contemporary UNIX workstations, the first PCs had no pretensions to host server processes. They were low-specification, affordable personal computers that initially conquered the market previously occupied by dedicated word processors, before becoming ubiquitous with the invention of the spreadsheet.
They had the ability to run batches, but this was intended merely to ease the task of installing software. Scripting just seemed old-fashioned, or best left to DOS.

Microsoft DOS could and did run batches from the command processor, and autoexec.bat is still there in Windows (called AUTOEXEC.NT and located in the %SystemRoot%\system32 directory).

After MSDOS borrowed from the UNIX clone Xenix, this command processor took on some of the features of UNIX shells such as the pipe, but with limited functionality when compared to the UNIX shells.

Microsoft Windows was originally booted from the command processor and, in later editions, took over the tasks of the operating system and incorporated the old MSDOS command-line interface tool (shell).

The features of the batch were sufficient to allow it to do a lot of configuration, installation and software maintenance tasks. The system wasn’t encouraged or enhanced after Xenix was abandoned, but remained a powerful tool. Xenix’s replacement, Windows NT or WNT (add a letter to DEC’s VMS to guess its parent.) did not have anything new for the command processor, and inherited MSDOS’s enhanced version from MSDOS 3.3.

This batch language still exists in the latest versions of Windows, though it is due to be deprecated. It has had quite a few enhancements over the years, but essentially what came into MSDOS is still the basis of what is currently shipped. It has a major failing within a Windows environment: it cannot be used to automate all facets of GUI functionality, since that demands at least COM automation, and some way of representing data other than text.

There have been attempts to replace the DOS batch file technology, including VBS and Windows Script Host (1998), but PowerShell is by far the most effective replacement.

Why did it take so long to get PowerShell?

Maybe because Microsoft under Bill Gates retained the vision that BASIC should remain the core language for Windows scripting and administration …

Basic scripting driving COM automation

Today the Office applications still have underlying BASIC scripting that can be controlled via COM automation. To keep batches consistent with this, the tasks done by batch scripting were to be done by Visual Basic for Applications (VBA), which was supplied with the operating system to drive all automation tasks. Wang’s Office system, similarly, was automated and scripted via COBOL!
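PowerShell later inherited this COM automation role. A minimal sketch, assuming Excel is installed locally (the output path is arbitrary):

```powershell
# Drive Excel through the same COM object model that VBA scripts against.
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$workbook = $excel.Workbooks.Add()
$workbook.Worksheets.Item(1).Cells.Item(1, 1).Value2 = 'Hello from PowerShell'
$workbook.SaveAs("$env:TEMP\demo.xlsx")   # hypothetical output file
$excel.Quit()
```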

Language-driven development and divergence

Over time, each Office application developed a slightly different, incompatible dialect, and they could not be kept in sync. Visual Basic was inadequate for the task and evolved into VB.NET, a somewhat comical dialect of Java. It proved to be unpopular. VBA was never quite consistent with the Visual Basic used for building applications.

Windows Script Host

Windows Script Host was introduced as an automation and administration tool to provide automation technology, primarily for Visual Basic and JavaScript. It supported several interpretive languages such as BASIC, Perl, Ruby, Tcl, JavaScript, Delphi, and Python. Initially it had security loopholes, finally solved with digital signing in Windows XP. It is still installed with MS Windows and still provides a number of useful COM interfaces that can be accessed in PowerShell and any other application that can interact with COM.

Windows Script Host was designed before .NET, so it is not able to directly use the .NET library. Nor does it use WMI, WS-Management, or CIM for administration and monitoring. It focused on managing the platform by using very low-level abstractions such as complex object models, schemas, and APIs. Although it was useful for systems programming, it was almost unusable for the typical small, simple, incremental task that is at the heart of administration, which needs very high levels of abstraction.

Microsoft competes in the server market

Microsoft was focused on the desktop market for a long time, so maybe did not realize the scale of the problem of competing in the server market. In the GUI-centric Microsoft culture and ecosystem, the idea was that all configuration was a point-and-click affair: fine for one or two servers, but not so easy or error-free for a server room.

PowerShell

Due to the determination and persuasive powers of Jeffrey Snover, Microsoft belatedly woke up to the fact that it had no viable solution for administering the number of servers in a medium-sized company.
The GUI didn’t scale, and the batch system of the command line, though useful, was stuck in a mid-eighties time-warp.

Microsoft had to replace the command line, so the new shell needed all the things it and other interactive shells had, such as aliases, wildcard matching, running groups of commands, conditional running of groups of commands, and editing previous commands.
It also had to replace VBA, and to integrate easily with Windows Management Objects.
It had to take over the role of VBA embedded in applications to make automation easier.

Microsoft needed something that looked both backwards and forwards, i.e. an industry-standard shell backward compatible with the command line.

PowerShell started with the POSIX standard shell of IEEE Specification 1003.2, the Korn Shell, which is also available in Windows. However, this dealt only with strings, so it had to be altered to also deal with objects so that it could access WMI, WS-Management, CIM and COM. Because it needed so much connectivity and data interchange, it had to be able to use the .NET library to process NET objects and datatypes.

So a new system also needed to understand .NET to utilize the man-years of work of providing a common object model able to describe itself, and that can be manipulated without converting either to or from text. The new scripting system had to be resolutely object-oriented.

So the new PowerShell needed the ability to use any .NET object or value.

PowerShell was given an intuitive naming convention based on the verb-noun pair, with simple conventions such as ‘Get’ to fetch an object, followed by a noun describing the object.

To replace the command line, PowerShell had to be better. The whole point of a command shell is that it must be convenient to type short commands into it, like Python’s REPL. PowerShell also needs to work with existing terse DOS command-line commands, so that an expert can type in very truncated commands.

PowerShell was also to be used in scripts stored on disk and repeatedly invoked, with just a change in parameters. This also meant that it had to be easy to read, with intuitive commands and obvious program flow.

It wasn’t an easy compromise, but it was done by means of aliases. Aliases also helped to ‘transition’ users from the other shells they were using to PowerShell (for CMD.EXE it is dir, type, copy, etc.; for UNIX, ls, cat, cp, etc.). You can even define your own in PowerShell!
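For example, with the standard aliases shipped in Windows PowerShell:

```powershell
# The same cmdlet answers to CMD and UNIX habits alike
Get-ChildItem        # canonical verb-noun name
dir                  # CMD.EXE-style alias
ls                   # UNIX-style alias (on Windows PowerShell)

# See what an alias resolves to
Get-Alias dir

# Define your own
Set-Alias -Name ll -Value Get-ChildItem
```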

PowerShell took an idea from .NET: everything should be learnable by discovery, without needing documentation. All the objects and cmdlets in PowerShell are self-documenting, in that you can use PowerShell itself to find out what they do, what functions can be called, and what parameters they take.
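Discovery works like this, using the standard cmdlets:

```powershell
# Find cmdlets that deal with processes
Get-Command -Noun Process

# Read the built-in documentation for one of them, with examples
Get-Help Get-Process -Examples

# Inspect the properties and methods of the objects it emits
Get-Process | Get-Member
```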

Why is the PowerShell Pipeline Important?
The pipeline in PowerShell inherited the concept of a pipe from UNIX. The PowerShell team had to solve the problem of dealing with Windows Management Objects and Instrumentation by passing objects rather than text down the pipe.

Having done so, it found itself in possession of a radical and extraordinarily useful system. It had the means of processing objects as though they were on a conveyor belt, with the means of selecting and manipulating each one as it passed down the pipeline.

This made the code easier to understand and also helped with memory management. A long file could be passed down a pipeline line-by-line, for example when searching for text, instead of having to read the entire file into memory (you can do that too if you want, and if you have no fear of the large object heap; you have to, for example, if you want to sort the lines). It also meant you needed only one cmdlet for selecting things, one for sorting, one for grouping, and one for listing things out in a table. PowerShell could do far more in a line of code than C# could.

Suddenly, the task of tackling the huge range of data on the average server that one might need to know about was less frightening. It was already there, and was now easy to get at and filter.
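The conveyor-belt idea, sketched with standard cmdlets (the log path is hypothetical):

```powershell
# Objects, not text, flow down the pipe: filter, sort, and select
# without parsing any output.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, Id, WorkingSet64

# Stream a large file line-by-line instead of loading it into memory.
# 'C:\logs\app.log' is a hypothetical path.
Get-Content 'C:\logs\app.log' | Select-String 'ERROR'
```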

Why is PowerShell useful?
Scripts don’t require special components.

PowerShell now has all the power of a compiled .NET language. Automating processes using the Windows command line required many existing command files to determine settings and to configure systems, which meant that the developer often had to write components in a compiled language. In developing scripts, part of the time was spent making small commands. This isn’t necessary in PowerShell, thanks to .NET.

PowerShell simplifies the management of hierarchical data stores. Through its provider model, PowerShell lets you manage data stores such as the registry, or a group of SQL Servers using the same techniques of specifying and navigating paths that you already use to manage files and folders.
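A sketch of the provider model in action, using the built-in registry provider:

```powershell
# The file-system techniques you already know...
Get-ChildItem C:\Windows

# ...work against the registry through its provider:
Get-ChildItem HKLM:\SOFTWARE\Microsoft
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
    Select-Object ProductName, CurrentBuild

# List the providers and drives available in this session
Get-PSProvider
Get-PSDrive
```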

This doesn’t turn PowerShell into a rival to C#, VB-Net, ActionScript or F#.

It is not for developing applications but for automating administrative tasks.

It is theoretically possible to write a webserver in PowerShell, or an interactive GUI using Windows Presentation Foundation but that is not its purpose.

What is PowerShell’s main use?

Traditionally, the command line was used for complex deployments. PowerShell can work remotely on any computer in the domain, and give far more information about the computer. It quickly became the default means of deployment for Windows.

This is great for the developer: he develops his package in NuGet and can use Chocolatey to deploy it. Linux allows you to install a package with just one simple command; Chocolatey does the same, but also allows you to update and uninstall the package just as easily. A simple script can grab the latest source from Git, compile it, install it and any dependencies, and do any special configuration tasks. There are 4,511 packages you can install from the Chocolatey site. PowerShell now has its own package manager, but the current incarnation isn’t as versatile as Chocolatey.
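A sketch of that workflow with Chocolatey’s standard commands (‘git’ here is just an example package ID):

```powershell
# Chocolatey gives Windows a Linux-style package workflow.
choco install git -y       # install a package and its dependencies
choco upgrade git -y       # update it
choco uninstall git -y     # remove it
choco list --local-only    # show what is installed
```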

Server Administration.

The release of PowerShell was most welcomed by the server teams.
The Microsoft Exchange Server team were early adopters and used PowerShell to allow the administration of Exchange.
The Microsoft SQL Server team, and Active Directory team also followed suit.
These teams provided specialized Applets that covered all aspects of the administration of the server.

Windows Server now has the capability of using Hyper-V to provide a ‘private cloud’, which allows companies to offer a degree of ‘self-service’ for server resources – all driven and maintained by PowerShell.

Provisioning.

Provisioning is one of the areas where PowerShell excels.

PowerShell’s DSC (Desired State Configuration) feature allows a PowerShell script to specify the configuration of the machine being provisioned, using a declarative model in a simple, standard way that is easy to maintain and to understand.

It can either ‘push’ the configuration to the machine being provisioned, or get the machine to ‘pull’ the configuration.
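A minimal DSC sketch: declare the desired state, compile it to a MOF document, then ‘push’ it (‘Server01’ and the output path are hypothetical):

```powershell
Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'Server01' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'   # DSC makes it so, and keeps it so
        }
    }
}

WebServerConfig -OutputPath C:\DSC          # compiles Server01.mof
Start-DscConfiguration -Path C:\DSC -Wait   # pushes the configuration
```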

Chocolatey, a PowerShell script, can not only install a large range of software, but also update it or remove it.

PowerShell has a built-in system called ‘PackageManagement’ that isn’t so versatile, but which allows you to install packages from a wider variety of sources.

Use PowerShell within an application or website
As well as providing a scripting environment, PowerShell can be embedded into an application by using System.Management.Automation, so that the user of the application can extend it via scripts. You can even do this in ASP.NET.

Parallelism and workflow
Although PowerShell is an interpreted dynamic language (using .NET’s DLR), its performance is enhanced by its ability to run parallel processes and to run asynchronously. It is also designed to run securely on other machines, remotely, and to pass data between them. All this is possible without Workflow.

Scripted processes that have complex interdependencies need to be interruptible and robust, and that is what PowerShell Workflow supports.

Workflow can be complicated, and will always be a niche technique for scripting. It is now able to run complex workflows within a domain, making it possible to script even the most difficult business processes: those that contain long-running tasks which require persistence and need to survive restarts and interruptions.

PowerShell uses the Windows Workflow Foundation (WF) engine. A PowerShell workflow involves the PowerShell runtime compiling the script into Extensible Application Markup Language (XAML) and submitting this XAML document to the local computer’s Workflow Foundation engine for processing.

PowerShell Workflow scripting is particularly useful in high-availability environments for processes such as ETL (data Extraction, Transform and Load) that potentially require throttling and connection pooling, and it is ideal where data must come from a number of sources and be loaded in a certain order.
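A workflow sketch showing checkpointing and parallel steps (the workflow name and steps are hypothetical; workflows require Windows PowerShell and were later removed in PowerShell 7):

```powershell
workflow Copy-NightlyData {
    InlineScript { Write-Output 'extracting...' }
    Checkpoint-Workflow        # persist progress so a restart resumes here

    # Independent load steps can run in parallel
    Parallel {
        InlineScript { Write-Output 'loading source A' }
        InlineScript { Write-Output 'loading source B' }
    }
}

Copy-NightlyData
```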

SQL 2016 Sp1- this is a big deal – Synergy Software Systems

November 19th, 2016

SQL Server 2016 SP1 delivers a consistent programmability experience across all editions. It also introduces all the supportability and diagnostics improvements first shipped in SQL Server 2014 SP2, as well as new improvements and fixes centered on performance, supportability, programmability, and diagnostics, based on learnings and feedback from customers and the SQL community.

SQL Server 2016 SP1 also includes all the fixes up to SQL Server 2016 RTM CU3 including Security Update MS16–136.

SQL editions have traditionally been differentiated by features – this meant that essential features for day-to-day database use were not present in the Express or Standard editions. Our view is that this is not desirable: there is a core set of features needed in all editions, and differentiation should be more about hardware size and resources supported.

Well, SQL Server 2016 SP1 now brings us close to that wish, so it’s a really big deal for the SMB and mid-market customer.

Once you have an application using SQL Server 2016 Standard Edition, you can just do an Edition Upgrade to Enterprise Edition to get even more scalability and performance, and take advantage of the higher license limits in Enterprise Edition. You will also get the intrinsic performance benefits that are present in Enterprise Edition.

The table compares the features which were previously available only in Enterprise edition and are now enabled in the Standard, Web, Express, and LocalDB editions with SQL Server 2016 SP1. This consistent programmability surface area allows developers and ISVs to build applications leveraging these features, which can be deployed against any edition of SQL Server installed in the customer environment.
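A quick way to check your edition, patch level, and which previously Enterprise-only features a database actually uses is the sys.dm_db_persisted_sku_features DMV. A sketch using the SqlServer module (the instance name is hypothetical):

```powershell
# Requires the SqlServer PowerShell module; '.\SQL2016' is a
# hypothetical instance name.
Invoke-Sqlcmd -ServerInstance '.\SQL2016' -Query @"
SELECT SERVERPROPERTY('Edition')      AS Edition,
       SERVERPROPERTY('ProductLevel') AS ProductLevel;  -- e.g. SP1
SELECT feature_name
FROM   sys.dm_db_persisted_sku_features;                -- per-database
"@
```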

This is a bold move by Microsoft, and should increase Standard sales, and customer satisfaction, without cannibalizing Enterprise sales. Standard Edition customers can use these features both to consolidate their codebases and, in many scenarios, build solutions that offer better performance.

There are many new features available across all editions in SP1.
There are still differences in Enterprise:

Availability features like: online operations, piecemeal restore, and fully functional Availability Groups (e.g. read-only replicas) are still Enterprise only.

Performance features like parallelism still don’t work in Express Edition (or LocalDB).

Automatic indexed view usage without NOEXPAND hints, and high-end features like hot-add memory/CPU, will continue to be available only in Enterprise.

Operational features like: Resource Governor, Extensible Key Management (EKM), and Transparent Data Encryption will remain Enterprise Edition only.

Others, like Backup Encryption, Backup Compression, and Buffer Pool Extension, will continue to work in Standard, but will still not function in Express.

SQL Server Agent is still unavailable in Express and LocalDB. As a result, Change Data Capture will not work. Cross-server Service Broker also remains unavailable in these editions.

In-Memory OLTP and PolyBase are supported in Express, but are unavailable in LocalDB.

Virtualization Rights haven’t changed and are still much more valuable in Enterprise Edition with Software Assurance.

Resource limits on the lower-level editions remain the same. The upper memory limit in Standard Edition is still 128 GB (while Enterprise Edition is now 24 TB).

I feel that Standard Edition is expensive enough that its memory limit should never be so dangerously close to the upper bound of a well-equipped laptop, and maybe we should expect the limit to increase with each new version. If you are on Standard Edition and scale is required, you can now use many Enterprise features across multiple Standard Edition boxes or instances, instead of trying to scale up.

All the newly introduced Trace flags with SQL Server 2016 SP1 are documented and can be found at http://aka.ms/traceflags.

SP1 contains a roll-up of solutions provided in SQL Server 2016 cumulative updates up to and including the latest Cumulative Update, CU3, and Security Update MS16–136 released on November 8th, 2016. Therefore, there is no reason to wait for SP1 CU1 to ‘catch up’ with SQL Server 2016 CU3 content.

The SQL Server 2016 SP1 installation may require a reboot after installation.

Microsoft Dynamics 365 now available in the U.A.E. – ask Synergy Software Systems

November 1st, 2016

Microsoft Dynamics 365 is a suite of cloud services to help companies to accelerate their digital transformation with purpose-built apps to address specific business needs.

Dynamics 365 unifies CRM and ERP functions into applications that work smoothly together across all divisions: sales, customer service, field service, operations, financials, marketing, and project service automation. These apps can be easily and independently deployed and scaled on demand.

Start with what you need the most.

All apps are delivered through easy-to-use, mobile experiences and feature offline capabilities.  

Users can rely on Power BI, Cortana Intelligence and Azure IoT functions which are natively embedded.

In addition, Dynamics 365 and Office 365 are deeply integrated. Since Dynamics 365 uses the new Common Data Service, customers can extend functionality and build custom apps using PowerApps and Microsoft Flow (News: PowerApps and Flow available), as well as professional developer solutions.

To find out more call us now on 00971 43365589.