Hyper-Converged, Elastic and Experience Based Design to Support Learning
Sitting back and reflecting on the past few years, and looking forward to the next few, I realized that a significant transformation of business drivers is happening in the learning space. Paradigm shifts are occurring in education, such as MOOCs, the flipped classroom, and blended learning, among many others. In addition, changes are happening in outcome-based education, social learning, design learning, and deep learning.
"Virtualization technology is moving beyond just server virtualization, and into “add-value-to-business” with application virtualization"
Then there are the learners themselves. They used to be a well-defined population (with a few outliers). This is no longer true. The “outliers” of the past are now mainstream. Learners range from 16-year-olds to people in their 60s, and they no longer come from one specific geographic area. They also arrive with different skill sets. The digital divide so often discussed in the past is no longer a significant issue; rapid advances in technology, along with increased consumer adoption, have narrowed that gap significantly.
These changes led us to define two business goals. The first is to provide students access to the application (software) resources for their courses. Easy access to technology means a user can reach software resources on their own device at any time. The resources might be word processing or compute-intensive software such as graphics applications. A student’s experience with these applications should be the same whether accessed from the US or Shanghai. The key is to provide consistency of service, irrespective of time, location or device.
The second goal relates to operating and financial business drivers. Sustainability is one of The New School’s key agenda items. Reducing power consumption and HVAC usage, while also repurposing space currently used to support IT demands back to teaching departments, adds value. For example, significant square footage is allocated to open labs housing hundreds of computers with software installed. Creating a solution for students to access software applications outside of such open labs would allow those spaces to be reallocated to the departments, opening up much-needed space, which is at a premium in New York City. It would also reduce the need for extra power and cooling.
Technology as Enabler
Achieving these goals required us to look at how we leverage technology as an enabler. Virtualization was our answer. The plan was to layer application virtualization on top of our existing virtualized servers. We needed to create a “Tomorrow’s Platform” for software application use.
Tomorrow’s Platform is a single platform regardless of location or workload, leveraging both on-premises and in-cloud platforms as one. Ideally, it has follow-me persona and data. Most importantly, it needs to function with a consistent experience and be agnostic to the type of device. The platform should have the ability to provide rapid updates and real-time application delivery, and be intelligent enough to migrate workloads based on business and environmental demands. Also, the platform should possess the ability to proactively identify and mitigate issues while being transparent to the user.
The model to run this platform needs to be composed of the right mix of technology, operating and financial models. The operating model should have the asset base optimized for run rate and/or predictable use and be able to couple or uncouple financial decisions from the process of change.
The key differentiators in the architecture we designed helped us achieve our goals without compromising these models. We spent our time on three areas: (1) the hyper-converged architecture, (2) the elasticity of the platform, and (3) the user experience.
Hyper-converged architecture – the design combined an on-premises architecture that served applications with the public cloud for storage. By doing this we were able to leverage, and even expand on, our existing infrastructure and services. We selected VMware Horizon to virtualize our applications. For storage, we chose Google Drive, where users could save their work; it offers far more storage than we could provide ourselves. In our design we wanted this public cloud to be our storage solution, but we questioned how to bring it into the app-serving environment as a mapped drive. ExpanDrive made this possible with an application that maps common external public cloud storage, including Google Drive, to the local computer as a “network attached drive.” The fundamental design seamlessly converges many discrete public cloud solutions with the on-premises infrastructure to present a single cohesive environment to the user.
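The convergence idea can be sketched in a few lines. This is an illustrative sketch only, not ExpanDrive's or Horizon's actual API: a hypothetical session broker attaches a user's public-cloud storage to a virtualized application session as a mapped drive, so on-premises applications and cloud storage appear to the user as one environment. All names (`SessionConfig`, `build_session`, the `googledrive://` URI) are invented for illustration.

```python
# Hypothetical sketch: surface a user's cloud storage inside a virtual
# app session as a mapped network drive (drive letter -> storage backend).
from dataclasses import dataclass, field


@dataclass
class SessionConfig:
    user: str
    app: str
    mapped_drives: dict = field(default_factory=dict)


def build_session(user: str, app: str) -> SessionConfig:
    """Build a session that maps the user's Google Drive as drive G:."""
    # The cloud store is presented as a local "network attached drive",
    # so applications in the session read and write it like any disk.
    drives = {"G:": f"googledrive://{user}"}
    return SessionConfig(user=user, app=app, mapped_drives=drives)


session = build_session("student42", "graphics_app")
print(session.mapped_drives["G:"])  # the user's cloud storage, seen locally
```

The point of the sketch is the shape of the design: the broker, not the user, decides how discrete cloud services are stitched into the session.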
Elastic platform – based on the applications a student wants to run, the system should allocate only the right amount of resources (CPU, memory). This led us to profile all our applications and who uses them. As seen in the diagram, the users fall into four categories: those who perform basic tasks, knowledge workers who use productivity applications, power users such as researchers, and designers who need significant compute power. The platform design then mapped these profiles to class types, each representing a set of applications, which allowed the system to dynamically allocate resources based on the type of user and application. It also gave the system the ability to balance load: the highest class of applications, requiring significant compute power, is directed to a GPU grid. This created an elastic system, using only the resources needed at a particular time yet scaling them dynamically as demand increases. The elasticity in the design helps achieve both the access and sustainability goals.
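The class-based allocation described above can be sketched as a simple lookup. This is a hedged illustration of the idea, not the production logic: the class names, the application catalog, and the resource numbers are all assumptions made for the example.

```python
# Illustrative sketch: applications are grouped into classes, and a session
# is sized from the class of the requested application. Numbers are invented.
APP_CLASSES = {
    "basic":     {"vcpus": 1, "ram_gb": 2,  "gpu": False},  # basic tasks
    "knowledge": {"vcpus": 2, "ram_gb": 4,  "gpu": False},  # productivity apps
    "power":     {"vcpus": 4, "ram_gb": 8,  "gpu": False},  # research tools
    "gpu":       {"vcpus": 8, "ram_gb": 16, "gpu": True},   # routed to GPU grid
}

APP_TO_CLASS = {  # hypothetical application catalog
    "word_processor": "knowledge",
    "stats_package": "power",
    "3d_modeler": "gpu",
}


def allocate(app: str) -> dict:
    """Return the resource profile for an application's class."""
    # Unknown applications default to the lightest class, so the system
    # never over-provisions; heavier classes are granted only on demand.
    cls = APP_TO_CLASS.get(app, "basic")
    return dict(APP_CLASSES[cls], app_class=cls)


print(allocate("3d_modeler"))  # heaviest class, directed to the GPU grid
```

Defaulting to the lightest class is what makes the design elastic in aggregate: resources scale up per session only when the application profile demands it.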
Experience – the final and most critical factor is the experience a user has when interacting with the platform. An analogy helps here: a duck on water appears calm on the surface, gliding effortlessly, yet below the surface it is paddling like crazy. The experience we want for the user is the “duck above water.” We are trying to give the online user the same experience as an open or research lab. This translates into three major components: speed, access and consistency. The choice of hardware addressed speed. Access was designed to be simple and intuitive as users interacted with the system. And irrespective of the device used or the user’s geographic location, the experience was consistent. Administrators could also proactively add application licenses when they saw a need.
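The proactive license piece mentioned above can be sketched as a simple threshold check. This is a minimal illustration under assumed data shapes, not any vendor's monitoring API: the usage figures, application names, and the 80% threshold are all hypothetical.

```python
# Sketch: flag applications whose license pool is nearly exhausted, so
# administrators can add licenses before users ever hit a denial.
def licenses_needing_attention(usage: dict, threshold: float = 0.8) -> list:
    """Return apps whose in-use/owned ratio meets or exceeds the threshold.

    usage maps app name -> (licenses_in_use, licenses_owned).
    """
    return [
        app
        for app, (in_use, owned) in usage.items()
        if owned and in_use / owned >= threshold
    ]


# Hypothetical snapshot of license consumption.
usage = {"cad_suite": (45, 50), "word_processor": (120, 500)}
print(licenses_needing_attention(usage))  # -> ['cad_suite']
```

A check like this, run continuously, keeps the "paddling" below the surface: capacity is added before the user ever notices a constraint.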
In conclusion, virtualization technology is moving beyond just server virtualization, and into “add-value-to-business” with application virtualization. Much remains to be done in this space, though vendors like VMware are constantly pushing the boundary. Most importantly, the feedback received from users on their experience with the system, whether an 18-year-old student or a 70-year-old researcher, has been tremendous.