I left EMC’s recent CIO Summit in Singapore thinking about Big Data and race cars.

During the Summit, Michael Taylor, CIO of Lotus F1 Racing, noted that their car has more than 150 sensors that capture 25MB per lap and 50GB of data per race, which can be analyzed to fine-tune the car for the next race. It’s absolutely incredible and is turning race cars into mobile R&D centers.

While I may not drive a race car, I am excited about how the proliferation of sensors and the Internet of Everything will benefit us personally and professionally in the future. And, as a CIO, it is also a reminder that we must take a more contemporary approach to IT to unlock the potential of this information.

We need to devise a way to capture and manage this big, fast data and provide a platform for our users to analyze this information in real time. At EMC, we are creating a ubiquitous data lake where we can ingest large amounts of data and then put intelligence on top of it. This goes beyond just visualizing the data: it gives our business users the ability to play with the data and change the variables to drive different outcomes and different behaviors.

However, as CIOs of contemporary IT organizations, we cannot focus just on the technology. To paraphrase one CIO’s comment at the Summit, a technology approach like this enables us to stop thinking about delivering products and solutions and begin to focus on how we can help our business users achieve value-driven outcomes. While technology is critical, contemporary IT requires that we take a long, hard look at our people and processes and evolve the organization to be business-facing, service-oriented, consumption-funded and, most importantly, value-driven.

Which brings me back to Michael’s session at our Summit. Capturing all that data is important, but only if it helps Lotus F1 get faster, more agile and more competitive after each and every race. How are you contemporizing IT to supercharge your business?
As this summer’s world-class Olympic athletes demonstrated, being successful in a fiercely competitive environment means constantly upping your game. Success is about pushing the limits of what is expected while always remaining strategic. Similarly, we’re preparing our hybrid cloud customers to grasp and maintain their competitive advantage.

The EMC Enterprise Hybrid Cloud was designed to increase agility for traditional applications, freeing up resources to invest in innovation. For customers looking to develop cloud-native applications and a DevOps strategy, EMC Native Hybrid Cloud provides the agility they need without the risk. Customers can invest time in driving business value as EMC continues to develop and deliver innovations in our cloud platforms.

Today, we are announcing several Enterprise Hybrid Cloud and Native Hybrid Cloud enhancements.

Enterprise Hybrid Cloud Has Increased Data Center Support by 2X

Physically distributed data centers can now be centrally managed using a single self-service catalog. The ability to seamlessly leverage up to four vCenters across four sites brings increased responsiveness to the business.

Optimize Data Protection on Enterprise Hybrid Cloud

Just as every Olympian needs a training strategy specific to their individual needs, not all applications have the same requirements. Workloads evolve, so adjusting the level of protection should be simple. For those using Avamar or Data Domain, backup service levels can be modified at any time throughout the workload lifecycle. New support for anytime Virtual Machine Encryption with CloudLink and RecoverPoint for Virtual Machines provides granular data protection down to the individual workload or virtual machine. Services can be added, deleted or modified on an as-needed basis.

Streamline the Delivery of Enterprise Hybrid Cloud Apps and Infrastructure Services

For new customers, the next version of the platform is targeted for later this year.
These enhancements are planned to include new workflows that automate application-to-infrastructure provisioning to reduce the time and complexity of delivering new services.

Start Small and Rapidly Scale with Native Hybrid Cloud

At the end of the third quarter of 2016, EMC plans to offer a new Native Hybrid Cloud option based on the VCE VxRail 200 and 200F models. It is designed for companies that want to get their feet wet with a DevOps strategy and expand as demand for new apps grows. Implementing a new cloud-native platform doesn’t mean organizations have to start from scratch: IT can leverage existing investments in, and knowledge of, vSphere and vSAN technology to proceed with confidence and minimal risk.

EMC Global Services deployment helps eliminate risk and accelerates time to value through a comprehensive portfolio of services that speed adoption, including newly enhanced Implementation Services. Operating model services help customers define and build the roles and processes needed to evolve from a technology-siloed IT organization to a services-oriented operating model.

Many Olympic athletes believe, “you can’t put a limit on anything.” At EMC, we’re removing those limitations.

To see our platforms in action, visit either of our demo sites: ehcdemo.com or nhcdemo.com. Or, learn more on our webpage and follow @emccloud on Twitter.
With All-Flash storage systems, predictable performance is a given. So if predictable performance is a given, what sets one All-Flash array apart from the others?

The answer is DATA SERVICES!

Data services are what make today’s All-Flash storage intelligent and add the unique capabilities required for the new cloud era.

So what exactly are data services in the context of All-Flash storage? Data services provide functionality, above and beyond storing data, that helps you simplify, optimize, protect and, at the end of the day, get more from your storage investment. Quick examples of data services include snap copies, quality of service, remote replication, intelligent caching, data reduction, encryption and many more.

So why don’t all storage systems offer all possible data services? It comes down to design and architecture. Developing, testing and supporting data services, especially at the tier-1 mission-critical level, is no small effort and requires a long-term commitment and vast engineering resources. Also, running data services within a storage array consumes system resources such as CPU and memory, very valuable commodities within today’s storage systems. If there aren’t enough resources available to run multiple data services, then things like predictable performance can be impacted.

Dell EMC offers a portfolio of All-Flash storage systems to meet a range of use cases and customer requirements. Each product has a unique design and architecture to meet a specific range of requirements and price points. We understand, for example, that there is a difference between what you can expect from a dual-controller architecture (like our industry-leading mid-range Dell EMC Unity product line) and a multi-controller ‘scale-out’ architecture (like our industry-leading tier-1 Dell EMC VMAX and XtremIO product lines).
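The resource tradeoff described above can be sketched as a toy model. Everything in it is hypothetical and for illustration only — the CPU budget, the service names and the per-service costs are invented numbers, not measurements from any real array. It simply shows how enabling more data services against a fixed controller budget shrinks the headroom left for host I/O:

```python
# Toy model (illustrative only): a controller has a fixed CPU budget that
# host I/O and data services must share. Enabling more services shrinks
# the headroom available for predictable host I/O performance.

CPU_BUDGET = 100  # arbitrary units per controller

# Hypothetical per-service CPU costs (not vendor figures)
SERVICE_COST = {
    "snapshots": 5,
    "remote_replication": 15,
    "data_reduction": 25,
    "encryption": 10,
    "qos": 5,
}

def io_headroom(enabled_services):
    """Return CPU units left for host I/O after the enabled data
    services take their share; never below zero."""
    used = sum(SERVICE_COST[s] for s in enabled_services)
    return max(CPU_BUDGET - used, 0)

# With only snapshots enabled, most of the budget serves host I/O.
print(io_headroom(["snapshots"]))       # 95
# Enable every service and the headroom shrinks sharply.
print(io_headroom(list(SERVICE_COST)))  # 40
```

The point of the sketch is the same as the paragraph above: each service is cheap in isolation, but the costs are additive, and on a fixed dual-controller budget something eventually has to give.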
Both certainly play a key role in satisfying our customers’ varying requirements, but both also offer their own range of data services based on their architectural design.

What happens when you try to run too many data services on an architecture not designed or proven to handle them? Simple: you run out of resources (like CPU and memory) and something has to give.

One example of where we believe a storage vendor may be trying to get too much out of its architecture is Pure Storage and their FlashArray product line. If you have seen the list of data services Pure Storage recently announced (many of which are not yet available), a few questions come to mind:

Can their FlashArray dual-controller architecture handle running everything they announced while maintaining predictable performance?

How will performance tradeoffs be managed?

Will they really be able to execute on their committed timeline?

As mentioned earlier, it is data services that set one storage system apart from another, so we understand why Pure Storage is trying to pack their FlashArray with all the basic data services it was missing, some of which customers have been waiting on for a while. But when you look at the architecture of the FlashArray product, and when you consider that the FlashArray already has to throttle back data reduction when the system gets busy in order to maintain performance, we think it is unlikely it can handle running even more data services in parallel. How will these additional data services get enough resources to operate without impacting performance and/or other data services already running?

Key Questions to Ask Pure Storage:

Is FlashArray now utilizing resources from both controllers (front and back end) to try to provide more resources for data services?
If so, how will this impact controller failovers and/or upgrades when one controller goes offline?

Will there be best practices for deploying data services without impacting each other or overall performance?

Can you leverage QoS to make sure the performance of critical data services (like remote replication) is not affected by other data services absorbing resources?

Will you have to choose between performance and data services based on which, and how many, data services you want to run?

To use an automobile analogy: the Ford Fusion (a 4-cylinder, 5-passenger car) and the Ford Explorer (an 8-cylinder, 7-passenger SUV) are both consistently best sellers, but they have completely different designs and serve different markets. No matter how much you dress up a Ford Fusion to look like a Ford Explorer, it still has the engine and body of a Ford Fusion. Moral of the story: if you want to offer a bigger and more powerful solution, you need to design one from the ground up.

It will be interesting to see how things play out. Let us know what you hear!

Want to learn more from our ongoing blog series? Check out these recent blogs:

NVMe – the Yellow Brick Road to New Levels of Performance

Scale Out or Sputter Out? Why Every All-Flash NAS Platform Isn’t Created Equal

Mission Critical Is More Than Just a Buzzword
This blog is the second in a three-part series written for National Cybersecurity Awareness Month. [previous post and final post]

We live in a world centered around 24/7 connectivity, making cybersecurity a 24/7 concern. This is receiving special attention throughout the month of October, as the tech community recognizes National Cybersecurity Awareness Month by spotlighting cybersecurity issues and hosting public discussions about the latest tools, threats and trends affecting consumers and businesses alike.

The theme of this year’s National Cybersecurity Awareness Month is “Our Shared Responsibility,” and true to theme, Dell teamed up with the National Cyber Security Alliance and Nasdaq to sponsor their cybersecurity summit in New York City.

Photography by Kelsey Ayres / Nasdaq, Inc.

The summit, held at Nasdaq headquarters, brought together some of the most influential leaders in the tech and cybersecurity space to discuss how today’s interconnected world is changing our society and the risks that come along with those changes. Panelists talked about how emerging technologies like artificial intelligence and machine learning will both drive new vulnerabilities and help solve them. I was happy for the opportunity to be a part of the event.

I took part in the panel “Securing Breakthrough Technologies – The Next Five Years.” The panelists and I discussed how the refinement of breakthrough technologies like artificial intelligence and machine learning will play an important role in the advancement of cybersecurity techniques and technologies. The main consensus was that artificial intelligence and machine learning are needed to analyze the billions of security events we receive daily, filter out the noise, identify what is and isn’t safe, and provide quality information for security professionals to examine.
With the volume of data being produced in organizations, matched by the volume of threats, IT professionals today need this advanced technology to stay ahead.

Later in the afternoon, I joined a panel with representatives from Cylance, Nutanix and PhishMe for a more in-depth discussion on artificial intelligence. The panel, “Artificial Intelligence – Friend or Foe?” further explored how innovation and the proliferation of connected devices are providing new attack vectors and a lucrative market for cybercriminals. On the other hand, the data from these devices can provide a wealth of insight to strengthen machine learning and help humans do their jobs better and more efficiently.

In the panel, I highlighted that there isn’t an area of security at Dell that isn’t using some form of artificial intelligence to do its job better. In the area of advanced threat prevention, AI today can predict the malicious intent of a piece of software and, paired with more advanced security information and event management (SIEM) products, detect anomalous behavior to generate indications of a compromise or attack.

Looking ahead, some of the big opportunities with AI lie in further advancements in generating valuable insights from security events, contextual access controls and data classification. Combining the sensitivity of the data itself with context about who is accessing it, where, how and on what device will be key to further protecting data from malicious activity and insider threats. In addition, there is an opportunity to better automate the response to threats. Today, we manually address security issues as they happen.
The next step is to be able to analyze an event or piece of information, decide on the response, and automate it, in order to speed time to resolution and free up IT and security professionals to focus on what’s important.

In the realm of cybersecurity, the use of AI and machine learning is in its infancy, and we’ve only just scratched the surface of what’s possible. Taking advantage of advanced technology solutions and modernizing our security infrastructure will help us protect our data and prevent threats while still allowing employees to be productive.
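The "filter out the noise" idea discussed above can be illustrated with a minimal sketch — not any vendor's product or a real SIEM pipeline, just a simple statistical baseline. The host names, event counts and threshold below are all made up for illustration; production systems use far richer models than a z-score:

```python
# Minimal sketch of event filtering: flag hosts whose event counts
# deviate sharply from the fleet baseline, so analysts see the
# anomalies rather than the noise. Illustrative only.
from statistics import mean, pstdev

def flag_anomalies(event_counts, threshold=1.5):
    """Return hosts whose event count is more than `threshold`
    standard deviations above the fleet mean."""
    counts = list(event_counts.values())
    mu, sigma = mean(counts), pstdev(counts)
    if sigma == 0:  # all hosts identical: nothing stands out
        return []
    return [host for host, n in event_counts.items()
            if (n - mu) / sigma > threshold]

# Hypothetical hourly event counts per host
events = {"web01": 120, "web02": 130, "db01": 110, "jump01": 900}
print(flag_anomalies(events))  # ['jump01']
```

The payoff is the one described in the post: instead of handing analysts four raw counters, the pipeline surfaces the single host that merits a human look.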
VANCOUVER, British Columbia (AP) — A Canadian judge is declining to ease bail conditions for a senior executive of Chinese tech giant Huawei who was arrested in Canada on a U.S. extradition warrant. British Columbia Supreme Court Justice William Ehrcke said Friday the current restrictions are the minimum required to ensure Meng Wanzhou, the daughter of Huawei’s founder and its chief financial officer, does not flee Canada. The judge dismissed Meng’s application for changes to her bail conditions, which would have allowed her to leave her Vancouver mansion outside the hours of her overnight curfew without the presence of security.