Sunday, July 30, 2017

The Game of Clouds 2017

The AWS Marketplace is growing at breakneck speed, with 40% more listings than last year! This insight and more were revealed when CloudEndure used its custom tool to quickly scan the more than 6,000 products available on AWS Marketplace. The top offerings are highlighted in the image below, and additional detail is available on the company's blog.


"So whether you are a Stark, a Targaryen, or even a Lannister, the Game of Clouds map will help you attain the crown of AWS cloud computing perfection."




( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Friday, July 14, 2017

Managing Your Hybrid Cloud

Photo credit: Shutterstock

Runaway cloud computing costs may be causing an information technology industry crisis.  Expanding requirements, extended transition schedules and misleading marketplace hype have made “Transformation” a dirty word.  Questions abound about how to manage cost variances and track assets and spend across different suppliers. A recent Cloud Tech article explained that while public cloud offers considerable cost savings in comparison to private or on-premises alternatives, there may also be significant hidden costs. Operational features like auto-scaling can cause costs to soar in line with demand for resources, making costs difficult to predict and budgets even harder to set. There is also an acute need for a holistic and heterogeneous system that can track the costs of cloud services from the point of consumption (e.g., an application or business unit) down to the resources involved (e.g., storage or compute services).
Sitting at the apex of all of these issues is the CFO or corporate Vice President of Finance. As the key budget manager for most organizations, this office is where many of the key financial decisions are made. It is also where the spectrum of IT cost responsibility extends from the purely analytical tasks of:
  • Optimization;
  • Forecasting and projection; and
  • Financial reporting
to the pedestrian but crucial accounting tasks like:
  • Show-backs and charge-backs;
  • Charge reconciliation; and
  • Budgeting policy management.
The most prevalent cause of these financial problems is a failure to keep track of virtual assets in the cloud.  Many companies have completely lost visibility and control of cloud computing costs simply because they failed to tag and track these assets.  Unfortunately, this error is typically realized only after hundreds or even thousands of cloud-based assets have been instantiated.
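To make the tagging point concrete, here is a minimal sketch in plain Python (no particular cloud SDK is assumed, and the asset records and tag keys are illustrative only) that rolls spend up by business-unit tag and surfaces the untagged assets that erode cost visibility.

```python
from collections import defaultdict

# Hypothetical asset records, as they might be exported from a cloud billing feed.
assets = [
    {"id": "vm-001", "monthly_cost": 412.50, "tags": {"business_unit": "marketing", "app": "web"}},
    {"id": "vm-002", "monthly_cost": 980.00, "tags": {}},                     # untagged!
    {"id": "db-001", "monthly_cost": 1310.75, "tags": {"business_unit": "finance"}},
]

def cost_by_business_unit(assets):
    """Roll monthly spend up to the business unit that consumed it."""
    totals = defaultdict(float)
    for asset in assets:
        unit = asset["tags"].get("business_unit", "UNTAGGED")
        totals[unit] += asset["monthly_cost"]
    return dict(totals)

def untagged_assets(assets):
    """Assets with no business_unit tag lose cost visibility; surface them early."""
    return [a["id"] for a in assets if "business_unit" not in a["tags"]]

print(cost_by_business_unit(assets))   # {'marketing': 412.5, 'UNTAGGED': 980.0, 'finance': 1310.75}
print(untagged_assets(assets))         # ['vm-002']
```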
Experts have also outlined a five-step process that helps enterprises bring control and governance to hybrid cloud IT costs.
Step 1: Establish governance thresholds and policies for services
Step 2: Access your cloud service provisioning accounts
Step 3: Track the costs of the services, including recurring and usage-based costs
Step 4: Enforce compliance on costs and asset usage using purpose-built cost analytics engines; initiate and track changes (see the sketch after this list)
Step 5: Simulate and optimize the control and compliance actions and better control your costs
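As a rough illustration of Step 4, the toy check below compares tracked spend against governance thresholds set in Step 1 and flags variances for follow-up. It is an assumption-laden sketch, not any particular cost analytics engine; all figures are invented.

```python
# Hypothetical monthly budgets (Step 1) and tracked spend (Step 3).
budgets = {"marketing": 500.00, "finance": 1500.00, "engineering": 2000.00}
actuals = {"marketing": 412.50, "finance": 1310.75, "engineering": 2480.00}

def flag_variances(budgets, actuals, tolerance=0.10):
    """Flag any business unit whose spend exceeds budget by more than `tolerance`."""
    flagged = []
    for unit, budget in budgets.items():
        spend = actuals.get(unit, 0.0)
        if spend > budget * (1 + tolerance):
            flagged.append((unit, spend, budget))
    return flagged

for unit, spend, budget in flag_variances(budgets, actuals):
    print(f"{unit}: ${spend:,.2f} vs budget ${budget:,.2f} -- initiate change tracking")
```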
Managing spend and assets across hybrid clouds also requires the availability of actionable data. This will help the CFO focus on which assets are performing as expected and which are not. Predictive analytics and insight-based recommendations can also help to drive the prioritization of changes that can have the most effective impact.
These sorts of challenges can certainly be acute, but the solution for helping organizations gain control of them will typically include holistic hybrid cloud management. In fact, financial organizations are just now realizing their critical role in managing the operational expenditure model embraced by cloud computing. Services specifically designed to address the financial management aspects of cloud metering, billing, workload management and service provisioning policies are just now hitting the marketplace.
One of these leading financial management services is provided by IBM. Its newly launched Cost and Asset Management application helps companies address escalating cloud costs and complexity while offering guidance on the next steps of hybrid cloud transformation. By using predictive analytics to monitor spend and surface recommendations on a single dashboard, the service gives finance and IT one system of reference for hybrid cloud governance. It can establish and enforce governance control points using financial and technical policies, and its ability to easily combine asset tags with policies can help the CFO identify and respond to financial variances before they become problems. Through the innovative use of Watson cognitive services, the application can tap into years of IBM experience to offer recommendations using built-in advanced analytics and cognitive capabilities. Acting on these suggestions can streamline cloud usage, predict future trends and identify waste.
If your company is currently experiencing these digital transformation challenges, learn more about managing hybrid IT finances at ibm.biz/ExploreCloudBrokerage. Establishing a focus on cloud governance, cost and asset management is a truly essential step towards expanding the operational benefits of hybrid cloud.


This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Thursday, June 29, 2017

American Airlines Adopts Public Cloud Computing


Did you know that the reservations systems of the biggest carriers mostly run on a specialized IBM operating system known as Transaction Processing Facility (TPF)? Designed by IBM in the 1960s, TPF was built to process large numbers of transactions quickly. Although IBM is still updating the code, the last major rewrite was about ten years ago. With all the major technology changes since then, it’s clear that IBM has already accomplished a herculean task by keeping an application viable for over 50 years!

Just like America’s aging physical infrastructure, the airlines are suffering from years of minimal investment in their information technology. This critical shortfall has been highlighted by a number of newsworthy incidents, including:

  • Delta, April 4, 2017 - Following storms that affected its Atlanta hub, Delta's crew-scheduling systems failed, causing days of operational issues for the airline. Buzzfeed reports that flight staff were left stranded and unable to log in to internal systems. There were reportedly hours-long wait times on the crew-scheduling phone system.

  • United, April 3, 2017 - A problem with a system used by pilots for data reporting and takeoff planning forced United to ground all flights departing from George Bush Intercontinental Airport in Houston for two hours. This is the third time that this system has been blamed for causing operational problems at United. Around 150 flights operated by United or its regional partners out of IAH were delayed that day, and about 30 were canceled, according to flightaware.com.

  • ExpressJet, March 20, 2017 - A system-wide outage at ExpressJet delayed the flights it operates as Delta, United, and American Airlines for hours. The FAA issued a ground stop at the airline's request, preventing its planes from taking off. That day it had 423 delays and 64 cancellations, about a third of its scheduled operations, according to flightaware.com.

  • JetBlue, Feb. 23, 2017 - An outage at JetBlue forced the airline to check in passengers manually in Ft. Lauderdale and Nassau. Passengers were unable to use mobile boarding passes and check-in kiosks.

While these incidents can be scary, American Airlines has recently taken a major step towards avoiding such events by migrating a portion of its critical applications to the cloud. In a recent announcement, the carrier said that it will be moving its customer-facing mobile app and its global network of check-in kiosks to the IBM Cloud. Other workloads and tools, such as the company’s Cargo customer website, will also be moved there. In a parallel effort, all of these applications will be rewritten so that they can leverage the IBM Cloud Platform as a Service (PaaS). This will be done using a micro-services architecture, design thinking, agile methodology, DevOps, and lean development.

“In selecting the right cloud partner for American, we wanted to ensure the provider would be a champion of Cloud Foundry and open-source technologies so we don’t get locked down by proprietary solutions” said Daniel Henry, American’s Vice President Customer Technology and Enterprise Architecture. “We also wanted a partner that would offer us the agility to innovate at the organizational and process levels and have deep industry expertise with security at its core. We feel confident that IBM is the right long-term partner to not only provide the public cloud platform, but also enable our delivery transformation.”

This latest announcement demonstrates why cloud computing is the future of just about every industry: the cost savings, operational improvements, data security and business agility delivered by cloud-based platforms are hard to match. According to Patrick Grubbs, IBM's vice president of travel and transportation, American Airlines will also be able to reduce cost by leveraging an inherent cloud computing ability: matching compute resources to the variable requirements that come from seasonal peaks.

This move by American Airlines is sure to spur others toward a quicker adoption of cloud computing.  I look forward to the stampede.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)
 



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Wednesday, May 31, 2017

Crisis Response Using Cloud Computing



Cloud computing is more than servers and storage. In a crisis situation it can actually be a lifesaver. BlackBerry, in fact, has just become the first cloud-based crisis communication service to receive a Federal Risk and Authorization Management Program (FedRAMP) authorization from the United States Government for its AtHoc Alert and AtHoc Connect services. If you’re not familiar with FedRAMP, it is a US government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. The BlackBerry authorization was sponsored by the US Federal Aviation Administration.

While you may not need a US Government certified solution in an emergency, your organization may really want to consider the benefits of cloud computing for crisis response. From a communications point of view, companies can use cloud based services to quickly and reliably send secure messages to all members of staff, individual employees or specific target groups of people. Smartphone location-mapping functions can also be easily installed and used. One advantage of using application-based software installed on an employee’s smartphone is that it can be switched off when an employee is in a safe-zone, providing a balance between staff privacy and protection. Location data can be invaluable and result in better coordination, a more effective response and faster deployment of resources to those employees deemed to be at risk. 


Using the cloud for secure two-way messaging enables simultaneous access to multiple contact paths, including SMS messaging, emails, VoIP calls, voice-to-text alerts and app notifications. Cloud-based platforms have an advantage over other forms of crisis communication tools because emergency notifications are not only sent out across all available channels and contact paths, but continue to be sent out until an acknowledgement is received from the recipient. Being able to send out notifications and receive responses, all within a few minutes, means businesses can rapidly gain visibility of an incident and react more efficiently to an unfolding situation. Wi-Fi-enabled devices can also be used to keep the communications lines open when more traditional routes are unusable.
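The "keep sending until acknowledged" behaviour described above can be sketched as a simple loop over contact paths. This is a hypothetical illustration only; real services such as AtHoc expose their own APIs, which are not modelled here, and the channel names and recipients are invented.

```python
import time

CONTACT_PATHS = ["sms", "email", "voip_call", "app_push"]

def send(channel, recipient, message):
    """Stand-in for a real channel integration; here it just logs the attempt."""
    print(f"[{channel}] -> {recipient}: {message}")

def notify_until_acknowledged(recipient, message, check_ack, max_rounds=5, wait_seconds=60):
    """Fan a message out across every contact path, repeating until the recipient acknowledges."""
    for round_number in range(1, max_rounds + 1):
        for channel in CONTACT_PATHS:
            send(channel, recipient, message)
        if check_ack(recipient):
            return True                      # acknowledgement received, stop escalating
        time.sleep(wait_seconds)             # wait before the next round of notifications
    return False                             # no acknowledgement; escalate to a human operator

# Example: a fake acknowledgement check that answers on the second round.
acks = {"count": 0}
def fake_ack(recipient):
    acks["count"] += 1
    return acks["count"] >= 2

notify_until_acknowledged("employee-42", "Shelter in place", fake_ack, wait_seconds=0)
```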


While you’re thinking about your corporation’s crisis response plans, don’t forget about the data. Accessing data through cloud-based services can prevent a rescue effort from turning into a recovery operation. Sources for this life-saving resource include:
  • Data exhaust - information that is passively collected along with the use of digital technology
  • Online activity - encompasses all types of social activity on the Internet such as email, social media and internet search activity
  • Sensing technologies – used mostly to gather information about social behavior and environmental conditions
  •  “Small Data” - data that is 'small' enough for human comprehension and is presented in a volume and format that makes it accessible, informative and actionable
  • Public-related data - census data, birth and death certificates, and other types of personal and socio-economic data
  • Crowd-sourced data - applications that actively involve a wide user base in order to solicit their knowledge about particular topics or events

Can the cloud be of assistance when you’re in a crisis? A cloud-enabled crisis/incident management service from IBM may be just what you need to protect your business. IBM Resiliency Communications as a Service is a high-availability, cloud-enabled crisis/incident management service that protects your business by engaging the right people at the right time when an event occurs, through automated mission-critical communications. The service also integrates weather alerts powered by The Weather Company into incident management processes to provide the most accurate early warning of developing weather events and enable a proactive response.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Tuesday, May 30, 2017

Cloudy Thinking and Digital Transformation



(Originally posted on the Engility Corporation Blog)

There’s a lot to gain from cloud computing, but success requires a thoughtful and enterprise-focused approach. Cloud computing decouples data and information from the infrastructure on which they reside, a process that is a LOT more involved than dragging some folders from your desktop to a shared drive.
Cloud computing as a mission transformation activity, not a technological one.
As an organization moves from local information hosting to the cloud, one of the most important challenges is addressing cloud computing as a mission transformation activity, not a technological one. Cloud computing isn't a new technology. It's a new way of consuming and provisioning information technology services. Adopting cloud computing means paralleling your mission processes, rethinking the economic models and abstracting your applications from the technology stack silos, which are currently the norm.

Interactions and dependencies between mission applications may be more important than the data or application itself.
One of the first lessons we learned supporting customers was that cloud migration shouldn't be planned as an application-by-application movement to a different hosting environment. Cloud adoption is an application portfolio activity. Interactions and dependencies between mission applications may be more important than the data or application itself. That's why upfront screening, analysis and digital infrastructure modeling are so critical. Boeing flew its Dreamliner aircraft designs on a computer before they started to build. Shouldn't we (and our customers) test future IT infrastructure on a computer before moving to the cloud? 
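One way to make the dependency point concrete is to model the application portfolio as a directed graph and order migration into "waves" so that no application moves before the services it depends on. The sketch below is a generic illustration of that idea, not the Cloud ASCEND tooling itself, and the portfolio is hypothetical.

```python
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# Hypothetical portfolio: each application lists the applications it depends on.
dependencies = {
    "crm":       {"identity", "billing"},
    "billing":   {"identity"},
    "identity":  set(),
    "reporting": {"crm", "billing"},
}

def migration_waves(dependencies):
    """Group applications into waves; each wave depends only on earlier waves."""
    sorter = TopologicalSorter(dependencies)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = list(sorter.get_ready())   # apps whose dependencies have all migrated
        waves.append(sorted(ready))
        sorter.done(*ready)
    return waves

print(migration_waves(dependencies))
# [['identity'], ['billing'], ['crm'], ['reporting']]
```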




That is the digital transformation approach we recommend to our customers, and we have now built an entire methodology around it called Cloud ASCEND. We formed an alliance with a few select partners: Cloud Security Alliance, Burstorm, Sequoia and IBM. These companies bring tools, lessons and optimizations available from the commercial sector (the technical operations viewpoint). We blend those offerings with the experience we've gained actually transitioning applications to the cloud and the lessons we've learned in the DoD and intelligence community (the secure mission delivery and performance viewpoint).

We knew the Cloud ASCEND digital transformation methodology couldn’t be some static, one-size-fits-all approach we trot out for every customer challenge. Our methodology constantly evolves because the world is always advancing. This is an important realization that all organizations need to internalize. Cloud computing enables rapid employment of new mission processes. It lets mission owners deploy capabilities that they didn't know existed. Cloud ASCEND is agile because effectively delivering the mission requires an agile methodology. 

It lets mission owners deploy capabilities that they didn't know existed.
Getting ready to migrate to the cloud? Consider a digital transformation strategy that delivers information mobility, operational scalability and mission agility. These are the real benefits that make the process worth the effort. Organizations can apply a digital transformation methodology to determine when and how to get started, allowing them to reduce risk, reduce complexity and migrate with confidence. Cloud ASCEND enables a sort of future proofing because digital transformation means thinking today and doing tomorrow.




Cloud Musings



Wednesday, May 17, 2017

Blockchain Business Innovation


Is there more than bitcoin to blockchain?

Absolutely, because today’s blockchain is opening up a path towards the delivery of trusted online services.


To understand this statement, you need to see blockchain as more than its most famous use case: bitcoin. As a fundamental digital tool, blockchain is a shared, immutable ledger for recording the history of transactions. Used in this fashion, it can enable transactional applications with embedded trust, accountability and transparency attributes. Instead of a Bitcoin blockchain that relies on the exchange of cryptocurrencies among anonymous users on a public network, a business blockchain provides a permissioned network with known and verified identities. With this kind of transactional visibility, all activities within the network are observable and auditable by every network user. This end-to-end visibility, also known as shared ledgering, can also be linked to business rules and business logic that drive and enforce trust, openness and integrity across the business network.  Applications built, managed and supported through such an environment carry a verifiable pedigree with security built right in (a bare-bones illustration follows the list below) that can:
  • Prevent anyone - even root users and administrators - from taking control of a system;
  • Deny illicit attempts to change data or applications within the network; and
  • Block unauthorized data access by ensuring encryption keys can never be misappropriated.
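To illustrate the "shared, immutable ledger" idea in the simplest possible terms, here is a toy hash-chained ledger in Python. It is not Hyperledger or any production blockchain; it merely shows why tampering with an earlier record invalidates everything that follows.

```python
import hashlib, json

def record_hash(record, previous_hash):
    """Hash the record together with the hash of the previous entry."""
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, record):
    previous_hash = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": previous_hash,
                   "hash": record_hash(record, previous_hash)})

def verify(ledger):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    previous_hash = "0" * 64
    for entry in ledger:
        if entry["prev"] != previous_hash or entry["hash"] != record_hash(entry["record"], previous_hash):
            return False
        previous_hash = entry["hash"]
    return True

ledger = []
append(ledger, {"from": "supplier", "to": "oem", "item": "part-123", "qty": 50})
append(ledger, {"from": "oem", "to": "retailer", "item": "widget-9", "qty": 20})
print(verify(ledger))                 # True
ledger[0]["record"]["qty"] = 500      # attempt to alter history
print(verify(ledger))                 # False
```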

From an industry vertical point of view, this approach can:
  • Give financial institutions an ability to settle securities in minutes instead of days;
  • Reduce manufacturer product recalls by sharing production logs with original equipment manufacturers (OEMs) and regulators; and
  • Help businesses of all types more closely manage the flow of goods and related payments with greater speed and less risk.
Innovators within just about any industry can build, run and manage their own business blockchain network. And even if the organization isn’t quite ready to do the heavy lifting, it can consume a blockchain service from companies like IBM.

Ready-made frameworks are also available from the Hyperledger Project, an open-source collaborative effort created to advance cross-industry blockchain technologies. Available Hyperledger business frameworks include:
  • Sawtooth - a modular platform for building, deploying, and running distributed ledgers that includes a consensus algorithm which targets large distributed validator populations with minimal resource consumption.
  • Iroha - a business blockchain framework designed for incorporation into infrastructural projects that require distributed ledger technology.
  • Fabric - a foundation for developing applications or solutions with a modular architecture that allows components, such as consensus and membership services, to be plug-and-play.
  • Burrow - a permissionable smart contract machine that provides a modular blockchain client with a permissioned smart contract interpreter built in part to the specification of the Ethereum Virtual Machine (EVM).

If your team is looking to innovate and take a leadership position within your industry, business blockchains may be the perfect enhancement for your business-focused applications.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Friday, May 5, 2017

How Quantum computing with DNA storage will affect your health



By Guest Contributor: 
Taran Volckhausen, Contributing Editor at Vector (http://www.indexer.me)

Moore's Law, which states that processing speeds will double every two years as we cram more and more silicon transistors onto chips, has been faltering since the early 2000s, when the law started to run up against fundamental limitations presented by the laws of thermodynamics. While the chip industry, with Intel leading the charge, has found ways to sidestep the limitations up until now, many are now saying that despite the industry’s best efforts, the stunning gains in processor speeds will not be seen again through the simple application of Moore’s Law. In fact, there is evidence to show that we are reaching the plateau for the number of transistors that will fit on a single chip. Intel has even suggested silicon transistors can only keep getting smaller for the next five years.
As a result, Intel has resorted to other practices to improve processing speeds, such as adding multiple processing cores. However, these new methods are just a temporary solution because computing programs can only benefit from multi-processor systems up to a certain point.
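For reference, the "doubling every two years" claim is just compound growth, which a few lines of arithmetic make explicit. The starting figures below are illustrative (roughly the transistor count of Intel's 4004 in 1971), and the projection assumes an ideal, uninterrupted cadence.

```python
def projected_transistors(start_count, start_year, end_year, doubling_period_years=2):
    """Classic Moore's Law projection: the count doubles every `doubling_period_years`."""
    periods = (end_year - start_year) / doubling_period_years
    return start_count * 2 ** periods

# ~2,300 transistors in 1971, projected to 2017 under an uninterrupted two-year doubling.
print(f"{projected_transistors(2_300, 1971, 2017):,.0f}")   # roughly 1.9e10
```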



RIP Moore’s Law: Where do we go from here?

No doubt, the end of Moore’s Law will certainly present headaches in the immediate future for the technology sector. But is the death of Moore’s Law really all bad news? The fact that the situation is stirring heightened interest in quantum computing and other “supercomputer” technology gives us reason to suggest otherwise. Quantum computers, for instance, do not rely on traditional bit processors to operate. Instead, quantum computers make use of quantum bits, known as “qubits,” two-state quantum-mechanical systems that can process both 1s and 0s at the same time.
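As a rough picture of what "processing both 1s and 0s at the same time" means, a single qubit can be written as a two-component state vector whose squared amplitudes give measurement probabilities. The sketch below is a didactic toy in plain Python, not a quantum programming framework.

```python
import math

# An equal superposition of |0> and |1>: amplitude 1/sqrt(2) for each outcome.
amplitude_0 = 1 / math.sqrt(2)
amplitude_1 = 1 / math.sqrt(2)

p0 = abs(amplitude_0) ** 2   # probability of measuring 0
p1 = abs(amplitude_1) ** 2   # probability of measuring 1
print(p0, p1, p0 + p1)       # 0.5, 0.5, ~1.0

# With n such qubits, the state spans 2**n amplitudes at once,
# which is the source of the potential speed-ups discussed above.
print(2 ** 50)               # amplitudes tracked by just 50 ideal qubits
```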

The advances in processing speeds made possible by quantum computing would make Moore’s Law look like a caveman’s stone tool. For instance, the Google-funded D-Wave quantum supercomputer is able to outperform traditional computers in processing speed by a mind-blowing factor of 100 million. With the advantages offered by “quantum supremacy” easy to comprehend, the race is now on between tech heavyweights such as Google, IBM, Microsoft and Intel to successfully prototype and release the first quantum computer for commercial use. However, due to the “weird” quantum mechanics the technology relies on, there are still barriers to working with and storing the data derived from processing with qubits.

Brave new world: Quantum Computing with DNA-based Storage

Basically, the fundamentals of quantum mechanics don’t permit you to store information on the quantum-computing machine itself. While you could convert its data for storage on traditional devices, such as solid-state hard drives, you would need to process a nearly infinite amount of information, which would require an impossible amount of space and energy to achieve. However, there could be a solution, but it requires us to look within. Not in a hippy-dippy “finding yourself” sort of way, but rather within the double-helix code found in humans and almost all other organisms: DNA. For decades, researchers have been toying with using DNA as both a computing and a storage device. Recently, a team of researchers at Columbia University demonstrated a coding strategy that could store 215 petabytes of information in a single gram of DNA. “Performing sentiment analysis on quantum computing and DNA storage topics with the Vector API may uncover robust demand for these technologies in various industries such as healthcare,” says Jo Fletcher, co-founder of Indexer.me.
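The storage idea rests on the fact that each DNA base can carry two bits of information. The hypothetical encoder below maps bytes to A/C/G/T and back; actual DNA storage codecs, including the Columbia work, add error correction and synthesis constraints that are not modelled here.

```python
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four bases (2 bits per base)."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"hi")
print(strand)            # CGGACGGC
print(decode(strand))    # b'hi'
```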

What would supercomputers mean for health treatments?

The human body is an incredibly complex organism. While the markets have released many life-saving drugs, there are many barriers holding us back from realizing their maximum potential. Standard computing isn’t powerful enough to truly predict the ways a drug will react with an individual’s particular genetic composition and unique environmental factors. With quantum computing based on DNA storage, however, you would have the ability to examine pretty much any scenario imaginable by mapping a much more accurate prediction of any given drug’s interaction with a particular person based on their genetics and environment. With quantum computing, medical professionals will be able to open a new chapter in drug prescription outcomes by tailoring each treatment to meet the exact requirements of each individual.

About Vector

Vector is a natural language processing application that performs information extraction on millions of news stories per day. It provides high value to any quantitative researcher, adding a collaborative-authoring workflow in perfect synergy with the most powerful and unique faceted search in the business. For more information, please visit www.indexer.me or contact jofletcher@indexer.me.

Useful Links

About Indexer

Indexer is a tech start-up in the artificial intelligence space and has a focus on computer vision and natural language processing technologies.

This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Friday, April 28, 2017

36 Shades of Hybrid IT

Photo credit: Shutterstock

Everyone has heard of the 50 Shades of Grey. But do you know the “36 Shades of Hybrid IT”? These shades are a new way of describing the 36 point solutions across a hybrid IT environment. Enterprises looking to transform the way information technology is leveraged should evaluate their options by analyzing a transition across three specific high-level domains and their relevant sub-domains, namely:
  • IT Implementation Model
    • Traditional
    • Managed Service Provider
    • Cloud Service Provider
  • Technology Service Model
    • Infrastructure-as-a-Service
    • Platform-as-a-Service
    • Software-as-a-Service
  • Deployment Model
    • Private
    • Hybrid
    • Community
    • Public

Combinatorially (3 implementation models x 3 service models x 4 deployment models = 36 options), these components identify the “36 Shades of Hybrid IT”. These domains and sub-domains outline a structured decision process that aims to place the right workload into the most appropriate IT environment.  It is also important to note that this is not a static decision. As business goals, technology options and economic models change, the relative value of these combinations to your organization may change as well. Another critical truth is that the single point solutions identified by this model are rarely sufficient to meet all enterprise needs, so a mix of two, three or as many as 10 variations of these specific point solutions may be required.  This is why hybrid IT and cloud service brokerage are such important skillsets for the modern information technology team.
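The 36 shades fall straight out of the cross-product of the three domains listed above. A few lines of Python make the enumeration explicit; the domain values come directly from the list, and everything else is illustrative.

```python
from itertools import product

implementation_models = ["Traditional", "Managed Service Provider", "Cloud Service Provider"]
service_models        = ["IaaS", "PaaS", "SaaS"]
deployment_models     = ["Private", "Hybrid", "Community", "Public"]

# Every "shade" is one combination of implementation, service and deployment model.
shades = list(product(implementation_models, service_models, deployment_models))
print(len(shades))     # 36
print(shades[0])       # ('Traditional', 'IaaS', 'Private')
```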

The IT Implementation domain addresses, at a high level, the three implementation strategy options most companies look at when digital transformation is the goal:
  • Continue a status quo strategy that uses a traditional enterprise data center to address requirements;
  • Select and contract with a managed service provider (MSP) by running a traditional acquisition that dictates requirements and operational processes through the RFP/bid process; or
  • Satisfy requirements through the use of standard offerings from one (or more) cloud service provider (CSP).

The primary driver in implementation model selection is either enforcement of enterprise IT governance processes (the status quo and MSP options) or acceptance of the CSP's IT governance processes (the CSP option). These choices are also strongly influenced by capital investment plans and long-term business model changes. Decisions within the Technology Service Model domain should typically reflect staff skillsets and training targets. IaaS demands the broadest range of skillsets and training requirements; it also delivers the greatest amount of flexibility and choice. The other end of the spectrum is represented by SaaS, which requires the smallest technical staff but may also act as a guardrail on your business processes and models. Overall control of data security and technology choices is reflected in deployment model preferences. In the Private model, the organization retains absolute control over all aspects of the information technology platform; choosing this option, however, also leads to the highest levels of capital and staffing investment. Public sits at the other extreme, requiring strategic alignment with the cloud service provider in exchange for lower capital and staffing investment requirements. The Hybrid and Community deployment options lie between these extremes and usually offer unique operational and economic capabilities.

Your digital transformation team should discuss and debate what these “36 Shades of Hybrid IT” mean for your company’s future. The team should also avoid leaving these important decisions to opinions and guesswork. Comparisons and options should be considered using real data. This is where tools like IBM Cloud Brokerage can be important to your digital transformation efforts. Organizations should carefully consider which business applications should be migrated to which “shade”. While some apps run best on traditional, physical servers, others are cloud-ready, but need the security of private clouds or enterprise-grade public clouds. There are even other applications where lower-priced commodity clouds will prove to be a viable and money-saving option.  In addition to migration plans and cloud choices, the best hybrid IT strategies also take into account provisioning of the necessary migration skills and technology management capabilities. After deciding the target environment for each of your business processes, the team may still need to do some application re-architecting.

If you and your team are dealing with Digital Transformation, a cloud brokerage platform can help by using real data to profile your workloads. It will also enable data-driven decisions on best-fit architectures, technology choices and deployment models. In addition, these platforms aid organizations in designing production solutions and in estimating costs before transformation even begins.

This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.







Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Friday, April 14, 2017

Digital Transformation Driven by ITaaS

Photo credit: Shutterstock

When executing an effective digital transformation strategy, management is tasked with placing the right workload into the most appropriate IT environment. This represents a shift from buying parts for self-assembly to composing services through self-serve consumption and pay-per-use models.  Quite often this transition also leads to the adoption of software defined environments across the enterprise infrastructure.

Software-defined infrastructures do, however, bring with them some very unique challenges. Many of the most prevalent issues center on the relatively immature state of the technology itself. The most significant aspect of this challenge is the lack of industry standards for device control. Control software must know the status of all network devices and trunks, no matter what vendor equipment is being used. While OpenFlow stands today as the de facto software-defined networking standard, it is a unidirectional forwarding-table update protocol that cannot be used to determine device status. It also doesn’t allow for the programming of port or trunk interfaces. A second critical issue is the lack of business process or enterprise IT policy definition capabilities.  This shortfall often leads to resource over-provisioning caused by automation rules that deploy “just in case” instead of “just in time”.
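The "just in case" versus "just in time" distinction can be made concrete with a toy capacity comparison. The demand figures and headroom policy below are invented for illustration; they simply show how policy-aware automation provisions far less idle capacity than always standing up peak capacity.

```python
hourly_demand = [40, 35, 30, 55, 80, 120, 95, 60]   # hypothetical instance-hours of demand

def just_in_case(demand, safety_factor=1.5):
    """Provision for peak demand plus headroom, all the time."""
    capacity = max(demand) * safety_factor
    return [capacity] * len(demand)

def just_in_time(demand, headroom=1.1):
    """Provision each hour for that hour's demand plus a small buffer."""
    return [d * headroom for d in demand]

idle = lambda plan: sum(c - d for c, d in zip(plan, hourly_demand))
print(f"idle capacity, just in case: {idle(just_in_case(hourly_demand)):.0f} instance-hours")
print(f"idle capacity, just in time: {idle(just_in_time(hourly_demand)):.0f} instance-hours")
```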

Taken together, the latter two problems heighten the risk of vendor lock-in. This issue was highlighted last year by Major General Sarah Zabel, Vice Director of the Defense Information Systems Agency.  This military organization deals with 2,400 trouble calls, 2,000 tickets, 22,000 changes, and 36 cybersecurity incidents every day. Its global network interfaces with networks owned and managed by other military departments and service providers. When addressing the Open Networking User Group, Major General Zabel stated that the agency suffered from vendor lock-in and too many devices.

“We need an area where vendors accept the fact we need a path away from their solution…We need less dependence on hardware and to be able to work with more software."
Another important but widely ignored challenge is the need to build organizational buy-in, a problem that is often accompanied by business process changes. According to Neal Secher, managing director and head of network architecture at BNY Mellon, "You need to partner with your business and show them the value. There's a snowball [effect] that will add value and allow you to add more automation. You need to prove through evidence that it works and won't hurt the business."

Understanding how to select, configure and operate within this new paradigm requires new technology, new technical skillsets and new management techniques. This trifecta of change cannot be easily assimilated within most large organizations. This is why IBM IT-as-a-Service (ITaaS) can often provide critical advice, assistance and technology.

ITaaS is an approach for defining and consuming digital services through a hybrid cloud infrastructure. This approach has often shown itself as the most cost effective path toward workload optimization. When used as part of a holistic strategy, hybrid cloud infrastructures can deliver multiple levels of value by:
  • Delivering programmable, virtualized and application-centric networking capability;
  • Managing the corporate mobile infrastructure and Bring-Your-Own-Device (BYOD) initiatives;
  • Modernizing and optimizing the IT security program for identity, application, data, network, and endpoint security in a way that manages risk and achieves compliance; and
  • Enabling a shift of executive focus from infrastructure maintenance towards the creation of innovative products and services.

Hybrid cloud environments alone, however, aren’t able to maximize the value of digital transformation.  To do that, you may also need to consider a cloud brokerage capability.  This tool can be used to plan, procure, govern and manage all IT services across all cloud models. To avoid vendor lock-in, this service can also be exercised across multiple IT service providers.


Software defined infrastructures can deliver infrastructure optimization and enhanced IT services at a reduced cost. Organizations that opt to take advantage of this new operational model should, however, seriously consider taking the ITaaS route.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Tuesday, April 4, 2017

IBM Interconnect 2017: Cloud, Cognitive and Data!

A couple of weeks ago, while attending IBM Interconnect 2017, I had the awesome opportunity to participate in the IBM Interconnect 2017 Podcast Series with Dez Blanchfield. I not only got to pontificate on all things tech, but also had the honor of collaborating with some of the best minds in the business. The series is provided below in its entirety.

ENJOY!




This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)