July 28, 2015
Security remains one of the most common topics raised whenever a cloud strategy (especially a managed off-site one) is proposed. Each day, the risk grows that leaked data becomes harmful information in the wrong hands, and organizations must be careful when deciding how to approach data security (the phrase “you can’t outsource risk” comes to mind here). Unfortunately, because of this risk, there still seems to be some hesitancy to even consider cloud-based environments.
Ultimately, the answer in situations like this is unique to each organization and depends on its security processes, but it should be recognized that cloud environments can help maintain, and even exceed, existing security practices.
In no particular order, here’s a brief list of how cloud environments can help with these discussions:
- Economies of Scale in Security – This may apply more to managed cloud service providers or SaaS providers, but it is worth mentioning regardless. Cloud service providers (CSPs) will often, though not always, have greater resources and expertise to dedicate to keeping their environments secure. This means they are likely not only to design a secure environment to begin with, but also to maintain that security as new threats and vulnerabilities are discovered. Security must be a core competency for a CSP because its customers rely on it; a CSP that cannot get security right will quickly become obsolete. As an example, consider IBM’s X-Force Security Research and Development team which, among other things, conducts thousands of hours of research to “power preemptive protection delivered by IBM products.” This doesn’t mean that organizations cannot exceed these standards in-house, but for most organizations, keeping pace with today’s changing security landscape is simply not a core competency. Finally, economies of scale also help when obtaining certifications or accreditations that may otherwise be difficult for individual organizations to attain.
*See http://www-03.ibm.com/security/xforce/ for more information about IBM X-Force.
- Architectural Flexibility – Relevant in all types of cloud discussions: by shifting core operational needs off-site, organizations are free to design the environment of their dreams, including its security requirements, without worrying about on-site constraints. For example, an organization can certainly encrypt its data and split it so that a single file is of little use if obtained (i.e., the data is encrypted and split into two blocks, so even a decrypted block yields only partial information), but it may not be able to split the data across multiple sites. With a cloud environment, the data could be split across geographically separate sites, which is beneficial if one site is compromised since the other site holds the rest of the data. Another example is installing and managing specialized security appliances that might otherwise be prohibitive on-site due to shortages of space and expertise.
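As an illustrative sketch (my own example, not from the original post), one simple way to realize the "split into two blocks, neither useful alone" idea is XOR-based secret sharing: one site stores a random pad, the other stores the data XORed with that pad, so a compromise of either site alone reveals nothing.

```python
import secrets

def split_two_way(data):
    """Split data into two shares using a one-time pad (XOR).
    Neither share alone reveals anything about the original."""
    pad = secrets.token_bytes(len(data))             # random pad, stored at site A
    share = bytes(a ^ b for a, b in zip(data, pad))  # data XOR pad, stored at site B
    return pad, share

def recombine(pad, share):
    """XOR the two shares back together to recover the original data."""
    return bytes(a ^ b for a, b in zip(pad, share))

record = b"customer-record-12345"
site_a, site_b = split_two_way(record)
assert recombine(site_a, site_b) == record  # both sites are needed for recovery
```

Real deployments would layer encryption and key management on top, but the sketch shows why geographic separation of the pieces adds a meaningful barrier.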
- Security Intangibles* – For lack of a better category title, there are certain benefits to using a managed off-site cloud run by a CSP or an internal third party. Parties not tied to an on-site environment are often better placed to manage security risks, especially for business continuity. For example, if a natural disaster were to strike a primary datacenter, employees on-site may be under obvious emotional duress, and the situation is better handled by someone outside the affected area. While a good cloud strategy focuses on automating processes like this, counting on manual intervention in times of stress is an added risk, and moving the burden to a third party mitigates some of it. The intention here is not to invoke fear, uncertainty, and doubt, but these scenarios should be addressed by a comprehensive security policy (which may or may not include cloud).
As mentioned in the second paragraph, there is no single answer that guarantees a flawless security strategy, but cloud-based answers should not be disregarded. Off-site or CSP-managed environments provide unique benefits that can help mitigate certain types of risk when used as part of a greater solution set.
As more organizations adopt and get used to cloud, I’m hopeful that posts like this will become less relevant!
July 24, 2015
Recently, we have been working with a pharmaceutical company focused on a variety of transformational cloud initiatives including social network analysis. Currently, the focus is on physician targeting, social listening, and optimizing sales engagement for a specific group of products.
I have included a brief presentation illustrating how to merge sophisticated, predictive analytics with Twitter data using IBM Watson Analytics, which enables business professionals to immediately pull Twitter data into any project. By automating the steps of data curation, predictive analysis and visual storytelling, Watson Analytics can identify and explain hidden patterns and relationships, accelerating the understanding of why things happen and what’s likely to happen in the future. Enjoy!
July 22, 2015
Recently our team has been meeting with several pharmaceutical and medical device companies regarding IBM’s GxP Cloud offering. GxP refers to “Good Practices” in regulated industries including food, pharmaceuticals, medical devices, and cosmetics. In the pharmaceutical and medical device industries, these include Good Laboratory Practice (GLP), Good Automated Manufacturing Practice (GAMP), Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), and Good Clinical Data Management Practice (GCDMP). The ‘x’ is merely a placeholder.
The purpose of the GxP quality guidelines is to ensure a product is safe and meets its intended use. The most central aspects of GxP are:
- Traceability: the ability to reconstruct the development history of a drug or medical device.
- Accountability: the ability to resolve who has contributed what to the development and when.
- Documentation: this is a critical component for ensuring GxP adherence.
The pharmaceutical and medical device industries represent a tremendous (global) opportunity for the IBM GxP Cloud. In preparation for upcoming meetings, I wanted to share the following details on Good Laboratory Practice (GLP), Good Automated Manufacturing Practice (GAMP), Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), and Good Clinical Data Management Practice (GCDMP). I hope you find this information beneficial as you prepare to position the GxP Cloud offering with your customers.
Good laboratory practice (GLP) refers specifically to a quality system of management controls for research laboratories and organizations, intended to ensure the uniformity, consistency, reliability, reproducibility, quality, and integrity of non-clinical safety tests of chemicals (including pharmaceuticals), from physico-chemical properties through acute to chronic toxicity tests.
GLP applies to non-clinical studies conducted to assess the safety or efficacy of chemicals (including pharmaceuticals) to man, animals, and the environment. GLP embodies a set of principles that provides a framework within which laboratory studies are planned, performed, monitored, recorded, reported, and archived. These studies are undertaken to generate data by which the hazards and risks to users, consumers, and third parties, including the environment, can be assessed for pharmaceuticals (preclinical studies only), agrochemicals, cosmetics, food additives, feed additives and contaminants, novel foods, biocides, detergents, etc. GLP helps assure regulatory authorities that the data submitted are a true reflection of the results obtained during the study and can therefore be relied upon when making risk/safety assessments.
Good Automated Manufacturing Practice (GAMP) is a set of guidelines for manufacturers and users of automated systems in the pharmaceutical industry. More specifically, The Good Automated Manufacturing Practice (GAMP) Guide for Validation of Automated Systems in Pharmaceutical Manufacture describes a set of principles and procedures that help ensure that pharmaceutical products have the required quality. One of the core principles of GAMP is that quality cannot be tested into a batch of product but must be built into each stage of the manufacturing process. As a result, GAMP covers all aspects of production; from the raw materials, facility and equipment to the training and hygiene of staff. Standard operating procedures (SOPs) are essential for processes that can affect the quality of the finished product.
Good manufacturing practices (GMP) are the practices required in order to conform to the guidelines recommended by agencies that control authorization and licensing for manufacture and sale of food, drug products, and active pharmaceutical products. These guidelines provide minimum requirements that a pharmaceutical or a food product manufacturer must meet to assure that the products are of high quality and do not pose any risk to the consumer or public. Good manufacturing practices, along with good laboratory practices and good clinical practices are overseen by regulatory agencies in the United States, Canada, Europe, China, and other countries.
Good clinical practice (GCP) is an international quality standard provided by the International Conference on Harmonisation (ICH), an international body that defines standards which governments can transpose into regulations for clinical trials involving human subjects. A similar guideline for clinical trials of medical devices is the international standard ISO 14155, which is valid in the European Union as a harmonized standard. These standards are sometimes referred to as ICH-GCP or ISO-GCP to differentiate between the two and from the lowest grade of recommendation in clinical guidelines. GCP enforces tight guidelines on the ethical aspects of a clinical study. High standards are required in terms of comprehensive documentation for the clinical protocol, record keeping, training, and facilities, including computers and software, and quality assurance and inspections ensure that these standards are achieved. GCP aims to ensure that studies are scientifically authentic and that the clinical properties of the investigational product are properly documented. Whether a study involves a new drug, a behavioral intervention, or an interview or survey, GCP provides investigators and their study teams with the tools to protect human subjects and collect quality data. GCP guidelines include protection of the human rights of subjects and volunteers in a clinical trial and provide assurance of the safety and efficacy of newly developed compounds. They also set standards on how clinical trials should be conducted and define the roles and responsibilities of clinical trial sponsors, clinical research investigators, and monitors.
Good clinical data management practice (GCDMP) refers to the current industry standards for clinical data management, consisting of best business practices and acceptable regulatory standards. In all phases of clinical trials, clinical and laboratory information must be collected and converted to digital form for analysis and reporting. The U.S. Food and Drug Administration and the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use have provided specific regulations and guidelines surrounding this component of the drug and device development process. The effective, efficient, and regulatory-compliant management of clinical trial data is an essential component of drug and device development.
July 21, 2015
I read a great blog post on Docker over at Valdhaus. You can find it here. All credit goes to them. I thought there were some really great misconceptions that I want to highlight.
1. If I learn Docker I don’t have to learn the other systems stuff – Boy, is this not true. Docker makes things very simple for the developer, but not for the operations team. Managing large servers that run the Docker runtime, and tuning them to be good Docker hosts, is still a job for the professional mechanic, not Tim the Toolman Taylor. Just like with any other technology, the “hello world” example is simple and easy, but getting to running production workloads is a big leap. Ensure you have a good operational understanding of Docker before you get there.
2. You should only have one process per Docker container – This is another problem with the “hello world” example. Proper crafting of Docker container content is a big key to success, not only from a runtime perspective but also at the Dockerfile level. Creating proper layers in the Dockerfile is important. Levels of abstraction are usually game changers for any type of architecture, and Docker is no exception. I like the blog’s mindset of creating Docker containers as “role-based virtual machines.” The single-process mindset leads to the wrong level of abstraction in many cases. I also highly recommend reading the referenced blog post “Microservices – Not a Free Lunch.”
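To make the layering point concrete, here is a minimal illustrative sketch (the base image, packages, and file names are my own, not from the post): each instruction creates a cached layer, so ordering the stable layers first means a routine code change only rebuilds the final ones.

```dockerfile
# Base layer: pinned OS image, changes rarely
FROM ubuntu:14.04

# System-dependency layer: cached as long as the package list is unchanged
RUN apt-get update && apt-get install -y python python-pip

# Application-dependency layer: rebuilt only when requirements change
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Application layer: the only layer rebuilt on a typical code change
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```

Flattening all of this into one giant RUN, or copying the code before installing dependencies, defeats the layer cache and makes every build pay the full cost.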
3. If I use Docker I don’t need an orchestration tool (my edit) – The Valdhaus blog really stresses the need for orchestration and promotes the use of Ansible. Of course, being from IBM, I would encourage UrbanCode Deploy, but the point is well taken. Coordinating the deployment of containers and the associated networking is challenging enough, but with an orchestration tool you can also do some really cool blue/green deployment strategies to achieve seamless zero-downtime production deployments.
The rest of the Valdhaus blog is great, but these 3 points were targeted at where I spend my time. The big benefits of Docker are achievable as long as you spend time ensuring the systems underneath your Docker environment are well maintained. Developers love Docker. Operations teams can also love Docker as long as they understand how to manage it.
July 16, 2015
There is a lot of discussion these days about disk becoming outdated and the need to shift to memory as the primary storage mechanism for high-performance computing. This is especially true for analytics-based use cases, where there are already numerous in-memory solutions such as databases and even computing appliances. The latest entrant to this space is Apache Spark, an open source framework for in-memory processing. Like Hadoop, Spark is a cluster computing system, but it uses memory rather than disk to cache data (see this link to learn more about Spark) and offers better support for programming frameworks other than MapReduce.
Spark should not be thought of as a replacement for Hadoop but rather as a complementary solution. An enterprise may have a Hadoop environment designed as a central place to store all types of data (a data lake) before the data is sent to other environments, such as ETL staging areas for formalized reporting or Spark for ad hoc analysis. Due to its in-memory nature, Spark is an ideal choice for quick, iterative analysis, with reported performance up to 100x faster than Hadoop in memory (and 10x faster even on disk).
As companies adopt this hybrid mindset, initial Spark use cases will likely leverage a cloud-bursting model in which an existing environment pushes certain data to an off-premises Spark-as-a-service offering. As soon as the processing is finished, the results can be pushed back to the main environment and the Spark instance can be deprovisioned. Especially given the currently higher price of memory, this pay-per-use model is a good fit for what is likely to be specialized, short batches of analysis.
As a note, Spark as a service will be offered on IBM Bluemix and prospective users can sign up to be among the first to use it at the following link: http://www.spark.tc/beta/.
July 15, 2015
I’m participating in updates and delivery of Cloud PoV training for sellers and one of the key messages we’re conveying is “focus on the outcomes and the offerings will follow”. The premise is that if we focus on what the client is trying to solve/achieve, we’ll better understand their context and can better recommend appropriate actions, whether in the form of solutions, services, or suggestions.
A recent example of this for me comes from a manufacturing client we’ve been engaged with. The client is a current SAP customer and is in discussions with SAP about what to do next. SAP is pushing a move to HANA and has looped us in with the client for pricing of HANA on the cloud. This is a very broad ask, so we sat down with the client to understand what they’re trying to do, and it became very clear there’s a disconnect between what the client wants and expects and what SAP is selling. The client is facing extreme cost pressures, changes coming from new leadership, and risks associated with old and unsupported hardware and software. The client needs to consolidate currently disparate regions and drive the company’s stock price up. The desired outcome is not about new functionality or a new platform; it’s about finding cost savings for the bottom line while lowering the risk of outdated systems. With this as the backdrop, HANA on the cloud doesn’t appear to be the right path forward.
It’s easy to get pulled into the specific request and take it as gospel, but we need to focus on the outcome to ensure that what’s being asked is really what’s needed, and that what’s needed is being addressed by the solution, service, or suggestion. Focusing on the outcome also helps us avoid falling into the trap of, as the client noted, “being a hammer to whom everything looks like a nail.”
July 15, 2015
Consumers all around the world are adopting more and more smart devices, presenting opportunities to engage in new ways and changing the way we interact, educate, shop, and care for patients. The pace of adoption is expected to continue, with 50 billion connected devices projected by 2020. Against that backdrop, IBM and Apple are expanding their partnership from a healthcare perspective through the IBM Watson Health Cloud, which provides a secure cloud platform and analytics for Apple’s HealthKit and ResearchKit. This will support health data entered by consumers in iOS apps and give medical researchers a secure, open data storage solution with access to IBM’s most sophisticated data analytics capabilities.
Apple’s ResearchKit has already proven the benefit of using smartphone apps for patient recruitment in just its few short months on the market. Stanford University was able to recruit 11,000 participants for a heart disease study in just 24 hours using ResearchKit, a feat that would normally take 50 medical centers an entire year to accomplish using traditional approaches. The University of Pennsylvania had been struggling for three years to recruit enough patients for a study on the impact of exercise on breast cancer survivors; over those three years, researchers mailed 60,000 letters and recruited only 351 patients. In March of this year, the team released a ResearchKit app called Share the Journey, which examines the same subject with less stringent enrollment criteria. In just one month, 2,000 patients enrolled in the program.
The future of mobile engagement 3.0 has arrived! Included below is a recent article from Mashable on the 15 trends that will shape mobile in 2015. Enjoy! http://mashable.com/2015/01/02/mobile-trends-2015/
Here are our 15 trends that will shape mobile in 2015.
1. The battle for the wearables market is about to begin.
Make no mistake, mobile tech is about to enter a brand new phase when it comes to wearables. Google Glass may have found more of a home in the business than the consumer sector — but nothing is set in stone, yet. And with the Apple Watch ready for tech-store shelves, prepare to see a push to win territory from all players. “I’m eager to see Apple Watch, how it works, how it looks,” said Andrew Whiting, vice president of marketing at Solstice Mobile (he said his company was an early adopter of Google Glass as well). “I believe that having that push from Apple is going to make all the difference.” In other words, Android Wear developers, start your engines.
2. Brands will push to engage many times via mobile (rather than sell just once).
What if your relationship with many of the brands you already use became more like a subscription for services? What if your bank, for example, interacted with you more like a Fitbit experience — your device telling you how much free cash you have at the start of each day, or helping you identify an opportunity to spend your credit card’s loyalty points in the moment at a store, airport, or event? That’s one idea on which Heather Cox, chief client experience officer at Citi, is training her eye for the next 12 months (and more). “The days of companies selling products to consumers is coming to an end,” Cox said. “The whole element of moving the position from sell to buy is something we’re going to be working on over the course of 2015 and beyond: how to engage customers with products and services very differently … the notion of the marketing funnel fundamentally changes. It becomes much more about a lifecycle, that circular notion of over time — how do we catch people, using data, and actually help them in the moment?”
3. Mobile payments will grow as a local phenomenon.
Only a few weeks after its launch, Apple Pay supports cards that represent 90% of the credit card purchase volume in the U.S. and can be used at 220,000 outlets – from national retail chains to your neighborhood store. “With Starbucks’ mobile payments success as validation,” said Pascal Caillon, general manager of Proxama’s North American operations, “consumers will soon be more inclined to use their phone to purchase low-value, daily items as a starting point. Merchants in these sectors will set the industry standard and will be the ones to watch.”
4. The mobile-payment race will enter its global stretch.
“Move over mobile payments — it’s all about global mobile payments now,” said Nataly Kelly, vice president of marketing for Smartling. “With a surge in global tourism fueled by the emerging middle-class in markets such as Asia, combined with the fact that more apps than ever are being localized into 10 or more languages, now app developers and mobile marketers will be challenged to support international currencies for people who might be traveling abroad.” Or, for that matter, to support travelers and residents who are downloading the app in another country. Point is, mobile payments will move toward an international level of functionality.
5. Competition for the connected home will intensify.
With Apple’s HomeKit already out there, and the Thread Group’s wireless networking protocol poised to capture a similar audience, the charge is on for a slice of the smart-home sector. “In the coming year, expect these two camps to furiously court developers into their ecosystems,” said Coby Sella, chief executive officer at Sansa Security. “Early rumblings suggest that Apple’s HomeKit will be a closed ecosystem, akin to the company’s App Store, while Thread Group will be more open, much like Google Play. No matter which protocol becomes the de facto standard, telcos and service providers such as Comcast and AT&T, and alarm-systems companies such as ADT, will have to make sure everything they deploy works with both.”
6. Vehicles will edge toward next-gen mobile integration.
Disruptive trends in 2015 will not be limited to portable and body-worn devices. They’ll also continue to find their way into our vehicles. “Today’s traffic-aware GPS will evolve to provide in-vehicle Wi-Fi and enhanced location-aware, pushed information services downloaded to the vehicle,” said Stu Lipoff, IEEE Fellow and engineering consultant. “Heads-up displays should roll out on some premium vehicles to display status, guidance, and augmented virtual overlays on the windshield.”
7. The Internet of Things will expand its footprint (but hold on a minute).
Keeping in mind the probability that wearables will make some kind of significant mark in 2015, the advent of these Internet of Things accessories may well amount to part of what is more a reset than a revolution in the coming year. “The technology is there, but consumer awareness is not,” said Matthew Davis, vice president of product marketing at StepLeader. “Companies and marketers haven’t convinced the U.S. public that wearables, smart homes, and connected cars are must-haves. They are still nice-to-haves.”
8. Prepare for a data-request pushback.
Developers will increasingly feel pressure to cut back on building mobile apps with data collection that’s unnecessary for core functionality. “Examples such as a flashlight app that taps a user’s geolocation and accesses user’s cameras and their calendars are raising some red flags,” said Domingo Guerra, president and co-founder of Appthority. “The argument by developers, that they need to monetize, will increasingly hold less water as enterprises and users recognize the true cost of ‘free’ apps and require more transparency and stronger reasoning from developers … Developers that recognize this trend will be able to differentiate their app in a sea of competition by offering better security and privacy than their competitors.”
9. 2015 will be the year when mobile becomes a target.
Attendant to the rise of our increasingly mobile-savvy ranks and widespread mobile penetration, someone somewhere is going to attempt a bad thing. “At least one corporate data breach will be traced back to a compromised mobile device which was used to access corporate networks after the compromised device is brought into the enterprise and connects to a trusted enterprise wireless network,” said Dwayne Melancon, chief technology officer at Tripwire. “iOS will continue to see a gradual increase of malware that’s targeting both jailbroken and non-jailbroken devices.” Companies, security vendors, and consumers themselves will need to be vigilant and enable two-factor authentication and other security measures to protect their data.
10. The screen-agnostic experience will grow, along with broader platform integration.
“Seamless context transfer across devices will be the new big app feature,” said Sravish Sridhar, founder and CEO of Kinvey. “Apps are increasingly becoming experiences that live across multiple endpoints — from wearables to phones, tablets, and web applications.” As this trend proceeds in 2015, offerings that can seamlessly transfer between these states as you move from one device to the next will have a huge advantage.
11. Brands and retailers will pay additional attention to m-commerce opportunities.
Retailers’ native apps will further leverage the barcodes that shoppers often scan with their mobile devices. That can mean stores and brands that are better equipped to keep customers engaged, drive sales, and increase customer loyalty and retention. “Retailers now recognize the power of barcode scanning in the context of high performance and reliable native mobile applications,” said Samuel Mueller, chief executive officer at Scandit. “We expect the trend of mobile-enabled commerce to continue throughout the 2014 holiday shopping season and into 2015 and beyond.”
12. Travelers will increasingly switch to brands’ apps for bookings.
“Companies such as TripAdvisor, Hipmunk, Skyscanner, trivago and Dohop are going to make it increasingly easier next year to book flights and hotels right within their apps instead of sending consumers off to airline, hotel, or online travel-agency websites to complete their bookings,” said Dennis Schaal, news editor at Skift. “Another thing is that Expedia, Hipmunk, and others are increasingly making it easier to start your trip research on a smartphone, continue it on a laptop, and then pick it right up again on a tablet or smartphone — right where you left off.”
13. Health and nutrition monitoring will expand.
“In 2015, health and nutrition monitoring will achieve previously unthinkable breadth and depth,” said Rameet Chawla, founder of Fueled. That means that your mobile and wearable devices will generate real-time data regarding your individual body — tracking blood glucose levels following meals, sleep quality as indicated by REM cycles, carbon dioxide levels in your muscles, and the like. Not to mention smart armbands for workout-related notifications, and smart shirts that can notify you about stress levels or an elevated heart rate.
14. BYOD policies will shift within companies.
“In the face of distributed workforces and project-based working groups that include third-party contractors, apps need to be shared across borders without the limitation of BYOD policies to secure devices,” said Art Landro, chief executive officer at Sencha. “Instead of practicing intrusive control over internal and external devices, enterprises will be turning to web-based tools that efficiently silo private and corporate data on the same device,” he said. That means allowing IT departments to secure corporate information without mucking about with this one smartphone or that one employee’s tablet. Prepare for more across-the-board solutions.
15. Business will dive deeper into internal mobile-first deployment (and desktops will follow the format).
“For years the value proposition has been utterly obvious,” said Matt Calkins, chairman and CEO at Appian, “and yet businesses have held back from mobile-first behaviors.” In the coming months, companies that aren’t already mobile-centric will start to cross the divide to increasingly screen-agnostic mobile platforms — allowing employees to keep working, no matter what devices they’re using. Furthermore, Calkins predicts, desktop iterations of software will start to emulate those of the corresponding mobile interface. “The result of the battle between mobile and desktop apps will be as follows,” he said. “Mobile wins, the device wins, the format wins on the desktop environment.”