Sunday, June 19, 2016

System Integration Morphs To Cloud Service Integration




Cloud Service Brokerage (CSB) is evolving from an industry footnote into a major system integration play. The role has become a crucial component of any cloud computing transition because brokers help organizations aggregate multiple cloud services, integrate those services with in-house applications, and customize them to better meet customer needs. CSBs also consult on and recommend the best-fit cloud services for a given set of business requirements and goals. Cloud brokers may even be granted rights to negotiate with different service providers on behalf of their customers. This transformation is driven by the rapid rise of cloud computing, a market that has grown from under $6B in 2008 and is expected to approach $160B in 2020. The global Cloud Service Brokerage market itself is expected to grow from $5.24 billion in 2015 to $19.16 billion by 2020.
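Taken together, those two brokerage-market figures imply a compound annual growth rate of roughly 30 percent. A quick back-of-the-envelope check (the dollar figures come from the text; the calculation itself is only illustrative):

```python
# Implied compound annual growth rate (CAGR) of the CSB market,
# using the figures quoted above: $5.24B (2015) -> $19.16B (2020).
start, end, years = 5.24, 19.16, 5

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 30% per year
```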



Since CSBs merge the functions of reseller, systems integrator, and independent software vendor (ISV) into a convenient service delivery model, they deliver solutions by aggregating cloud services sourced from multiple cloud service providers. They can also customize those services to meet unique business requirements. Although CSBs often deliver transactional cloud services, their real value lies in the unique ongoing operational support they provide. Unlike financial or real estate brokers, who typically end the customer relationship after the sale, CSBs:

  • Enable cloud service arbitrage based on cost, performance or operational need;
  • Help companies migrate operations to the cloud and assist with staff augmentation and training;
  • Provide cloud service auditing and SLA monitoring services;
  • Help in focusing and managing organizational cloud service demand;
  • Provide toolsets to assist in the migration and integration of enterprise applications; and
  • Help in change management and the selection and integration of other managed services.

 
By automating and operationalizing the governance of cloud services, CSBs can efficiently multi-source services and augment them with third-party metering and monitoring. Using CSBs, organizations also accelerate their transition to hybrid IT models. This marketplace is typically segmented by type of service: cloud brokerage and cloud brokerage enablement, with cloud brokerage enablement further segmented into internal and external brokers. When used internally, cloud enablement platforms help enterprises adopt the new hybrid IT, multi-sourced operating model. By building organic expertise, companies can personalize IT service consumption and unify IT service delivery through the use of a corporate self-service store, a dynamic service marketplace, and continuous delivery. This centralized, supply chain approach unifies the ordering, execution, and management of multi-sourced solutions across legacy and cloud resources by centrally delegating and tracking execution.

Another important management capability they deliver is performance auditing. Cloud Service Provider (CSP) price/performance has been shown to vary by as much as 1,000 percent depending on time and location. High levels of variability have also been seen within the same CSP processing the exact same job. This means the cost for an enterprise to process the exact same job in the cloud could vary by just as much.

Changes in instance types, pricing, performance over time, and availability of services by location highlight the inadequacy of traditional benchmarking philosophies and processes. The use of “performance quotas” by service providers may also lead to operational cost increases. This generally happens when a customer hits a CSP-determined management quota and the performance of the relevant instance is reduced. Active metering and monitoring by a CSB can help companies detect and avoid this hidden cost.
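A sketch of the kind of active metering a CSB might automate (provider names, sample timings, and the 25 percent spread threshold are all illustrative assumptions, not real measurements): repeated wall-clock timings of an identical benchmark job expose suspicious variability.

```python
from statistics import mean, pstdev

def audit(provider, times_sec, max_rel_spread=0.25):
    """Summarize repeated wall-clock times for the *same* benchmark job on
    one provider. A large relative spread hints at throttling, noisy
    neighbors, or a quota-triggered performance cut."""
    avg = mean(times_sec)
    rel_spread = pstdev(times_sec) / avg
    return {"provider": provider,
            "mean_sec": round(avg, 1),
            "rel_spread": round(rel_spread, 2),
            "flag": rel_spread > max_rel_spread}

samples = {
    "provider_a": [100, 105, 98, 102],   # stable timings
    "provider_b": [100, 240, 95, 410],   # wildly variable timings
}
reports = [audit(p, t) for p, t in samples.items()]
```

Only the second provider would be flagged for follow-up, even though both ran the same job.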

As the cloud service brokerage market matures, CSBs are destined to replace the traditional system integrator. Maturing technologies and CSB offerings will also make this service a “must-have” for the foreseeable future.
 

 (This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.)
 




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Thursday, June 16, 2016

Networking the Cloud for IoT – Pt 3 Cloud Network Systems Engineering

Dwight Bues & Kevin Jackson

(This is Part 3 of a three-part series that addresses the need for a systems engineering approach to IoT and cloud network design: Networking the Cloud for IoT - Pt. 1: IoT and the Government, Networking the Cloud for IoT - Pt. 2: Stressing the Cloud)


The Case For Cloud Network Systems Engineering

IoT networking requirements are vastly different from those supported by today’s cloud network. The processing and transport levels are multiple orders of magnitude higher than ever seen before. More importantly, the societal, economic, and safety ramifications of making mistakes during this transition are off the scale. This is why systems engineering of the cloud computing network is now an immediate global imperative.

Systems engineering has many different aspects, but it all starts with the question, “What do I really want to know?” This is the beginning of the CONOPS document referenced earlier. That document captures User Needs, which are formal statements of what the user wants from the system. The CONOPS leads to Derived Requirements which, through an iterative process, are analyzed against a Target Architecture. Once a project is underway, methods of Integration are planned in order to provide Validation (did we build the right system?) and Verification (did we build the system right?) of the requirements. Further considerations for SE include how to conduct Peer Reviews of a design (Systems, Hardware, or Software), studying Defects, and establishing processes to ensure the Quality of the final product and Compliance with Standards.



While multiple sources indicate that the business world is investing heavily in the IoT, there is no indication that these investments are addressing the question of what society really wants to know in an IoT world. To ensure success, design formality is necessary, lest “IoT” become the latest retired buzzword. Dr. Juran, in Juran on Leadership for Quality, makes the point that quality improvement programs failed because leadership assigned vague goals and responsibilities while failing to commit resources to staff projects and reward achievements. This caused TQM, Six Sigma, and the like to be relegated to the “dustbin” of quality programs. Is it wise to relive this error in our transition to IoT?

Ten Steps of Design Rigor

Jay Thomas, in the Embedded Magazine article “Software Standards 101: Tracing Code to Requirements,” opined that the embedded industry’s standard steps for making systems safe or secure include:


  • Performing a safety or security assessment;
  • Determining a target system failure rate;
  • Using the target system failure rate to determine the appropriate level of development rigor;
  • Using a formal requirements capture process;
  • Creating software that adheres to an appropriate coding standard;
  • Tracing all code back to its source requirements;
  • Developing all software and system test cases based on requirements;
  • Tracing test cases to requirements;
  • Using coverage analysis to test completeness against both requirements and code; and
  • For certification, collecting and collating the process artifacts required to demonstrate that an appropriate level of rigor has been maintained.
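Several of the steps above amount to bidirectional traceability between requirements, code, and tests. A minimal sketch of a traceability-gap check (requirement, file, and test-case identifiers are hypothetical):

```python
def trace_gaps(all_reqs, req_to_code, test_to_reqs):
    """Report requirements with no implementing code, requirements with no
    covering test case, and test cases that trace to no requirement."""
    tested = set().union(*test_to_reqs.values()) if test_to_reqs else set()
    untested = all_reqs - tested
    unimplemented = all_reqs - set(req_to_code)
    orphan_tests = {t for t, reqs in test_to_reqs.items() if not reqs}
    return untested, unimplemented, orphan_tests

all_reqs = {"REQ-1", "REQ-2", "REQ-3"}
req_to_code = {"REQ-1": ["sensor.c"], "REQ-2": ["alarm.c"]}
test_to_reqs = {"TC-01": {"REQ-1"}, "TC-02": set()}
untested, unimplemented, orphans = trace_gaps(all_reqs, req_to_code, test_to_reqs)
```

Any non-empty result blocks certification until the gap is closed or justified.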

Using this model, security issues must be addressed through a multi-layered approach. From a systems engineering point of view, users must be required to use complex passwords, and Public Key Infrastructure (PKI) certificates must be a minimum requirement for operating across the IoT network. The article “How to Protect Wearable Devices Against Cyberattacks,” in the IEEE Roundup online magazine, postulated that devices with limited functionality can be linked to the user’s smartphone, which acts as a conduit for the device’s information, thus securing it from the outside world. Most important of all, though, is ensuring that the proper amount of systems engineering design rigor has been exercised in the development process. This makes defects easier to find and far less costly to fix than a multimillion-dollar security breach.
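The complex-password requirement above can be enforced mechanically. A minimal sketch (the 12-character minimum and the four character classes are illustrative assumptions, not a citation of any standard):

```python
import re

def password_ok(pw, min_len=12):
    """Minimal complexity policy: minimum length plus lower-case,
    upper-case, digit, and symbol character classes."""
    classes = (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]")
    return len(pw) >= min_len and all(re.search(c, pw) for c in classes)

print(password_ok("correct-Horse-7"))  # True
print(password_ok("weakpw1"))          # False
```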

Although it would be impossible to implement this type of rigor globally across the cloud and its underlying network, embedded systems tenets can be applied to individual IoT projects. Since embedded systems also have a history of low development overhead, minimal memory or storage per unit, and cost-driven development cycles, a more rigorous IoT design process may save society from seeing a collapse of the cloud. In the past, this type of design rigor has paid off in successful, maintainable designs. Let’s therefore use what we’ve learned from the past to avoid a future that none of us want to see.

 

 
Dwight Bues, of Engility Corp., is a Georgia Tech Computer Engineer with 30+ years' experience in computer hardware, software, systems, and interface design. He has worked in Power Generation, Communications, RF, Command/Control, and Test Systems. Dwight is a Certified Scrum Master and teaches courses in Architecture, Requirements, and IVV&T. He is also a certified Boating Safety instructor with the Commonwealth of Virginia and the United States Power Squadrons. He is currently working on several STEM projects, sponsoring teams for competitions in the Aerospace Industries Association’s (AIA) Team America Rocketry Challenge (TARC) and the Robotics Education and Competition Foundation’s VEX Skyrise Robotics Challenge.


Kevin L. Jackson is a globally recognized cloud computing expert, a cloud computing and cybersecurity Thought Leader for Dell and IBM, and Founder/Author of the award-winning “Cloud Musings” blog. Mr. Jackson has also been recognized as a “Top 100 Cybersecurity Influencer and Brand” by Onalytica (2015), a Huffington Post “Top 100 Cloud Computing Experts on Twitter” (2013), a “Top 50 Cloud Computing Blogger for IT Integrators” by CRN (2015), and a “Top 5 Must Read Cloud Blog” by BMC Software (2015). His first book, “GovCloud: Cloud Computing for the Business of Government,” was published by Government Training Inc. and released in March 2011. His next publication, “Practical Cloud Security: A Cross Industry View,” will be released by Taylor & Francis in the spring of 2016.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)







Saturday, June 11, 2016

Networking the Cloud for IoT - Pt. 2 Stressing the Cloud

Dwight Bues & Kevin Jackson

(This is Part 2 of a three-part series that addresses the need for a systems engineering approach to IoT and cloud network design. Part 1 is Networking the Cloud for IoT - Pt. 1: IoT and the Government.)


IoT: Unprecedented Stress on the Cloud and Its Underlying Network

Karen Field, Penton Communications’ IoT Institute director, in her article “Start Small to Gain Big,” postulated that an oil drilling platform with 30,000 sensors would generate about one terabyte of data per day. She also stressed that only 1% of that data would likely be used. From a systems engineering point of view, this data flow is multiplied by the trillions of other IoT sensors in the cloud, introducing unprecedented data processing and data transport stress. Industries, and competing companies within those industries, will also be forced to weigh the economic impact of paying for this transport and processing.
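Those figures can be sanity-checked with simple arithmetic: one terabyte per day across 30,000 sensors works out to roughly 33 MB per sensor per day, under 400 bytes per second per sensor, with only about 10 GB of the daily terabyte actually used.

```python
TB = 10**12            # one decimal terabyte, in bytes
SENSORS = 30_000
SECONDS_PER_DAY = 86_400

per_sensor_day = TB / SENSORS                       # ~33.3 MB per sensor per day
per_sensor_sec = per_sensor_day / SECONDS_PER_DAY   # ~386 bytes per second
useful = 0.01 * TB                                  # ~10 GB of the daily TB is used
```

The per-sensor rate looks modest; it is the aggregate, multiplied across every platform and industry, that stresses the network.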

How will these parochial and business-centered decisions drive networking priorities across the cloud? Will all of the high-priority data get through? Will any data be lost? How will you know? If a piezoelectric sensor detects a crack in the drill pipe, will you get the notification, or will it be out-prioritized by the ambient air temperature reading that arrives every 10 minutes? Every day, data gets delayed through the Internet and the results are not catastrophic. Tomorrow, though, a stock trade “trigger” could be delayed, costing billions, or key economic indicators could be lost, triggering large economic movements. As with today’s Internet, tomorrow’s IoT will need to ensure that the RIGHT data gets to its destination in a timely fashion.
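The prioritization question above is, at bottom, a queueing-discipline question. A toy sketch (priority levels and messages are invented for illustration) in which a critical alert preempts routine readings when transport capacity is limited:

```python
import heapq

CRITICAL, ROUTINE = 0, 2  # lower number = delivered first

def publish(queue, priority, seq, message):
    """Enqueue a reading; `seq` breaks ties so ordering stays deterministic."""
    heapq.heappush(queue, (priority, seq, message))

def drain(queue, budget):
    """Deliver at most `budget` messages this cycle, highest priority first."""
    return [heapq.heappop(queue)[2] for _ in range(min(budget, len(queue)))]

queue = []
publish(queue, ROUTINE, 1, "ambient temp 21C")
publish(queue, ROUTINE, 2, "ambient temp 21C")
publish(queue, CRITICAL, 3, "piezo: crack detected in drill pipe")
delivered = drain(queue, budget=1)  # only the crack alert gets through
```

With a transport budget of one message, the crack alert arrives even though two routine readings were queued first; without explicit priorities, it would have waited its turn.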

Securing the IoT

Programming Research, in its white paper “How IoT is Making Security Imperative for all Embedded Software,” recommended that software developers take a more careful approach to releasing new IoT products: “Security problems often stem from the need to accelerate development and bring new products to market ahead of the competition. A majority of security vulnerabilities are a result of coding errors that go undetected in the development stage. Carnegie Mellon’s Computer Emergency Response Team (CERT), in fact, found that 64% of vulnerabilities in the CERT National Vulnerability Database were the result of programming errors.” The research firm also believes that software development organizations should incorporate coding standards such as CERT C and utilize the Common Weakness Enumeration (CWE) database. Companies like Programming Research, Critical Software, and Jama Software offer tools to assist with static analysis of code against these standards. Luckily, an increasing number of organizations are making adherence to these guidelines and standards a requirement for both internal development organizations and outsourced application development vendors.
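One way organizations operationalize such standards is to gate builds on static-analysis output keyed to CWE identifiers. A hypothetical sketch (the finding format and the blocked CWE list are assumptions, not any vendor's schema):

```python
def gate_build(findings, blocked_cwes=frozenset({119, 476, 787})):
    """Fail the build when any static-analysis finding maps to a blocked
    CWE ID (here: buffer errors, NULL dereference, out-of-bounds write)."""
    hits = [f for f in findings if f["cwe"] in blocked_cwes]
    return (len(hits) == 0, hits)

ok, hits = gate_build([
    {"file": "parser.c", "line": 42, "cwe": 787},   # out-of-bounds write
    {"file": "ui.c", "line": 10, "cwe": 563},       # unused variable: allowed
])
```

The build fails on the memory-safety finding while tolerating the style-level one, turning the coding standard from guidance into an enforced requirement.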

Figure 1, from the TASC Institute “Peer Review” course, illustrates that software defects, although they are “facts of work,” act like mines in a minefield. Typical “code and test” methodologies effectively just clear a path through the minefield. System overload, operator error, or race conditions can force the system off of the “cleared path” and into unexplored territory. This “unexplored territory” has, until recently, been the very place where commercial vendors installed their “back doors” to perform maintenance, collect metrics, or verify that the software is an authorized copy. Commercial software vendors are now cracking down on these features because they represent security vulnerabilities that could be easily exploited by a hacker.



Figure 1: Defect Detection




 


 






Tuesday, June 7, 2016

Networking the Cloud for IoT - Pt. 1: IoT and the Government


Dwight Bues & Kevin Jackson

This is Part 1 of a three-part series that addresses the need for a systems engineering approach to IoT and cloud network design.



The “Internet of Things” depends on the “Cloud” for the processing and storage of data. The Cloud’s backbone, however, is the network. This logic train reveals the importance of professional rigor and solid System Engineering (SE) of the network.

Imagine a sea of sensors, put out in the field by multiple independent vendors. Complying with specifications only in an informal sense, these sensors send terabytes of data to the cloud. The availability of this data to anyone globally is impressive enough. Even more amazing is the fact that anyone in the world can also develop an application or a powerful API to filter out the “nuggets” of valuable information.

This is why businesses everywhere are investing heavily in IoT’s promise. It also drives a real expectation that IoT will deliver cognition, an ability to acquire knowledge and understanding through “thought, experience, and the senses,” to the Cloud. Dr. Dennis Curry, of Konica Minolta, even hinted that cognition at the IoT level is actually possible. As stated in his white paper Genius of Things, “…the real promise of the IoT [is] its potential to deliver such a leap of insight about the world around us. Only when this becomes a reality will we understand the true genius made possible by connecting many things together.”

IoT and the Government

Stuart Ravens of Ovum, in his white paper Understanding the IoT Opportunity: An Industry Perspective, postulated that a smart streetlight could send status updates to a central facility. If the streetlight failed, a work order would be generated; if there were no replacements in stock, a purchase order to procure a replacement part could be automatically produced and an install date scheduled. Metrics could be generated on failure rates, which could pre-position replacement parts or pre-schedule fix dates. While we recognize the “Nirvana” aspect of this “Smart Cities” technology, there are many pitfalls. A streetlight is likely one of thousands of items for which municipal governments must accept maintenance responsibility, so singling out one item for special treatment is probably not a good idea. Other municipal procurements of consumables could include chlorine and fluorine gas (and filters) for fresh water treatment, ferric sulfate for removing nitrates from effluent water recovered during sewage treatment, diesel fuel for backup generators, and gravel and asphalt for pavement repair. At some level, a multi-disciplined engineering team needs to be employed to tease out the most important needs.
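The streetlight scenario describes an event-driven maintenance flow. A minimal illustration (the identifiers, status values, and stock model are hypothetical):

```python
def on_status_update(light_id, status, stock):
    """React to a streetlight status message: a failure raises a work
    order, plus a purchase order when no spare part is in stock."""
    actions = []
    if status == "FAILED":
        actions.append(("WORK_ORDER", light_id))
        if stock.get("streetlight_head", 0) == 0:
            actions.append(("PURCHASE_ORDER", "streetlight_head"))
    return actions

actions = on_status_update("SL-117", "FAILED", stock={"streetlight_head": 0})
```

A failed light with an empty stockroom yields both a work order and a purchase order, exactly the chain Ravens sketches; the pitfall is that a city would need thousands of such handlers, one per asset class.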


To do this, a Concept of Operations (CONOPS) development process is leveraged to gather all of the varieties of goods that the government purchases into one procurement document. These items can then be prioritized by cost, operational value, and other factors to address temporal response levels, public safety implications, and maybe even political sensitivities. Only then could one even start to determine what would be bought, how, and by whom. This may be a simplistic example, but many plant managers could verify that this holistic approach is necessary to prevent the inevitable “whack-a-mole” effect that happens when cost is unilaterally driven down in one area, only to have it pop up in another. Ravens further states, “Most public sector organizations lack the skills and expertise to design public IoT infrastructures, placing greater reliance, and importance, on vendors… This model is inappropriate for cash-strapped public sector organizations, which as yet are unsure of the business case for IoT.”
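The prioritization step could be sketched as a simple weighted scoring pass over the CONOPS item list (the weights, item names, and scores below are invented purely for illustration):

```python
def prioritize(items, weights):
    """Rank procurement items by weighted score; higher score ranks first."""
    score = lambda item: sum(weights[k] * item[k] for k in weights)
    return sorted(items, key=score, reverse=True)

weights = {"safety": 0.5, "cost_impact": 0.3, "response_urgency": 0.2}
items = [
    {"name": "chlorine gas", "safety": 9, "cost_impact": 4, "response_urgency": 7},
    {"name": "streetlight head", "safety": 3, "cost_impact": 2, "response_urgency": 2},
    {"name": "diesel fuel", "safety": 7, "cost_impact": 6, "response_urgency": 8},
]
ranked = prioritize(items, weights)
```

Under these weights, public-safety consumables outrank the streetlight part, which is the holistic outcome the CONOPS process is meant to produce.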


Observing the IoT need from a national level, the U.S. Department of Homeland Security (DHS) recently awarded a contract “to advance detection capability and security monitoring of networked systems, collectively known as the Internet of Things.” The National Institute for Occupational Safety and Health (NIOSH) has also issued a Request for Information (RFI) asking companies what the IoT can do to support first responders. In its response, the Center for Data Innovation determined that there are both first- and second-tier sensors usable by emergency responders. First-tier sensors could include network-connected smoke and temperature sensors that could detect the location of a fire in a building, wearable sensors that could track the location and status of emergency response personnel at the scene, and accelerometers that could detect the structural compromise of a hospital building after an earthquake. Second-tier sensors are detectors that are normally used for other purposes. Satellites with infrared cameras, for example, could be used to track wildfires to ensure the safety of ground crews, a customer’s cellphone could be used to detect and locate a car crash, and neighborhood air quality sensors could be used to track the spread of a fire or toxic spill. These can be very helpful, but are they secure from being spoofed by an IoT hacker?


In a 1970s episode of Hawaii Five-O, a group of would-be jewel thieves used their knowledge of the tsunami warning system to fake a warning and evacuate Honolulu. With that done, they would have been able to break into any jewelry store and freely take what they wanted. Only when the authorities noticed that the first sign of a tsunami (a drastically receding tide) did not occur as predicted did they discover that the warning was a fake. While we hope that no one would use the IoT in this manner, such hope is not a prudent governmental policy option. Governments need to ensure that the IoT used by first responders is secure enough to prevent loss of property and human life.





 




