Archive for the ‘virtualization’ Category
This Design Guide is focused on the design of the cloud infrastructure and the components that make up a cloud infrastructure. It does not provide information on how to build a complete private cloud, public cloud, or hosted cloud infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS) solution. The cloud infrastructure contains the building blocks on which any Windows Server 2012 cloud service or delivery model is built.
This document consists of the following sections:
- Cloud Infrastructure Technical Overview. This section provides a short overview of cloud computing and the requirements of a cloud infrastructure.
- Cloud Infrastructure Design. This section provides an introduction to the cloud infrastructure design process.
- Designing the Cloud Storage Infrastructure. This section provides information related to design considerations for building the cloud storage infrastructure using Windows Server 2012 platform features and capabilities.
- Designing the Cloud Network Infrastructure. This section provides information related to design considerations for building the cloud network infrastructure by using Windows Server 2012 platform features and capabilities.
- Designing the Cloud Compute (Virtualization) Infrastructure. This section provides information related to design considerations for building the cloud compute (virtualization) infrastructure using Windows Server 2012 platform features and capabilities.
- Overview of Suggested Cloud Infrastructure Deployment Scenarios. This section provides information on three suggested cloud infrastructure deployment scenarios and the design decisions that drive selecting one over the others.
Ignoring the poor form of quoting oneself: in a post last year I commented on the “amount of IT infrastructure capability it delivers as standard” in Windows Server 2012 and on “Microsoft’s learning from the demands of running infrastructure at large scale with virtualization as an integrated part of that”. Microsoft’s recent announcement of System Center 2012 SP1 seems to reinforce this view.
Also on learning from scale: How Xbox can transform your datacentre
The Register’s view on How to build a perfect private cloud with Windows Server 2012 shows how this might all be put together on-premises.
That article also raises a key point about application availability and whether it is delivered by the application or by the infrastructure. The move to application-level replication that we saw with, for example, database availability groups (DAGs) in Exchange 2010, and that application’s use of local storage, has raised questions about when SAN functionality (thinking hardware-based storage replication) is actually required, and opens up the possibility of replication to the public cloud. Where to place responsibility for application availability is tricky, as infrastructure architects may be reliant on platform or application architects to know what availability models an application provides; that information could surface through technology roadmapping and vendor management. The separation of the application (software), platform and infrastructure layers in private cloud architectures can be seen in both the Microsoft model and the Cisco Domain Ten blueprint; for more on the latter see Introducing Cisco Domain Ten(SM) – Cisco Services’ Blueprint for Simplifying Data Center and Cloud Transformation.
Ivan Pepelnjak’s take: HYPER-V NETWORK VIRTUALIZATION (WNV/NVGRE): SIMPLY AMAZING
WNV is just part of the networking innovation in Windows Server 2012. See the Windows Server 2012 Networking Whitepaper for more.
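As a rough illustration of what WNV adds to each tenant frame, here is a minimal sketch of the GRE header layout that NVGRE uses, as described in the NVGRE specification; this is just the header packing for illustration, not Microsoft's implementation.

```python
import struct

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte GRE header used by NVGRE.

    Per the NVGRE spec: the Key Present bit is set, the protocol type is
    0x6558 (Transparent Ethernet Bridging), and the 32-bit key carries a
    24-bit Virtual Subnet ID (VSID) plus an 8-bit FlowID.
    """
    assert 0 <= vsid < (1 << 24) and 0 <= flow_id < (1 << 8)
    flags_version = 0x2000           # K bit set, GRE version 0
    protocol_type = 0x6558           # Transparent Ethernet Bridging
    key = (vsid << 8) | flow_id
    return struct.pack("!HHI", flags_version, protocol_type, key)

# A tenant's Ethernet frame goes on the wire as:
#   outer Ethernet | outer IP | this GRE header | inner (tenant) frame
print(nvgre_header(vsid=5001).hex())  # → '2000655800138900'
```

The point of the 24-bit VSID is that it identifies the tenant's virtual subnet independently of the physical network, which is what lets multiple tenants reuse overlapping IP address space on the same fabric.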
The RTM of Windows Server 2012 is imminent; here’s some background reading.
Early reflections on Windows Server 2012 (Was: “Offloaded Data Transfer (ODX) in Windows 8 and Windows Server 2012”)
I have always felt a little disappointed by the “SANs” that I have encountered, possibly because I have never gotten to use a top-of-the-range product, but also because those I have encountered seem unable to avoid accumulating data from standard applications like file and print. It often seems that once you have spent a big chunk of money on a centralised storage system it becomes inevitable that all storage moves there, driven by reluctance to buy any more direct-attached storage, “ease of management” and “integration with backup”. However, JBODs just keep falling in price. My experience (just mine, no general reflection intended here) with Exchange had the following storage profile:
- Exchange 5/Exchange 2000 – DAS array.
- Exchange 2003, combined roles – SAN-based: split database and logs; performance hampered by not being able to afford enough spindles; surprisingly unlucky with -1018 corruptions.
- Exchange 2007, clustered mailbox roles – DAS array: storage-group best practice for LUN allocation and so on; it just worked, but ESE improvements (single-bit error correction) make comparison with my Exchange 2003-on-SAN experience difficult. Mailboxes became very large due to business needs, and this started hurting performance.
- Exchange 2010, combined roles – DAS: DAG; the application handles the replication/availability. Excellent support for large mailboxes.
The reason for writing about this, which has nothing to do with Exchange, is that while watching recent TechEd presentations on Windows 8 and Windows Server 2012 I saw some of the demos of Offloaded Data Transfer (ODX), and I guess this is the sort of heavy-lifting handoff that I always hoped for when paying for a storage array: the host hands the copy work off to the array rather than shuttling every block through itself. For the detail, see those TechEd sessions.
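To make the heavy-lifting handoff concrete, here is a toy model of the token-based copy flow ODX uses. The class and method names are illustrative (loosely mirroring ODX’s POPULATE TOKEN and WRITE USING TOKEN operations), not a real API: the host exchanges small tokens with the array, and the array moves the blocks internally.

```python
import secrets

class StorageArray:
    """Toy model of ODX-style token-based copy: the host never reads
    the data it is copying; it only passes tokens to the array."""

    def __init__(self):
        self.luns = {}      # lun_id -> bytearray of block contents
        self.tokens = {}    # token -> (lun_id, offset, length)

    def populate_token(self, lun_id, offset, length):
        # Host asks the array to represent a byte range as a token
        # (a "representation of data" token in ODX terms).
        token = secrets.token_hex(8)
        self.tokens[token] = (lun_id, offset, length)
        return token

    def write_using_token(self, token, dst_lun, dst_offset):
        # Host hands the token back; the copy happens inside the array.
        src_lun, offset, length = self.tokens.pop(token)
        data = self.luns[src_lun][offset:offset + length]
        self.luns[dst_lun][dst_offset:dst_offset + length] = data

array = StorageArray()
array.luns["vm-store"] = bytearray(b"golden image bits")
array.luns["new-vm"] = bytearray(32)

# Only the token crosses the host's storage path, not the data itself.
token = array.populate_token("vm-store", 0, 17)
array.write_using_token(token, "new-vm", 0)
print(array.luns["new-vm"][:17])  # → bytearray(b'golden image bits')
```

The win is in the transfer sizes: deploying a VM from a golden image becomes a token exchange of a few hundred bytes instead of gigabytes read and rewritten through the host’s HBA and CPU.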
I wanted to give this feature a dedicated post because, in a way, it is singular for me: it relates to a “high end” hardware capability, whereas, as I learn more about Windows Server 2012, the truly remarkable thing to me is the amount of IT infrastructure capability it delivers as standard. In the areas of storage and filesystem alone, any vendor delivering just those components as present in Windows Server 2012 would be a major player. Microsoft server releases since Windows 2000 have felt to me like continuous evolution; Windows Server 2012 feels like punctuated evolution, a step change brought about, it seems, by Microsoft’s learning from the demands of running infrastructure at large scale with virtualization as an integrated part of that. As Novell and the need for scalability were a spur that drove innovation in Windows 2000, so VMware in the enterprise and Amazon AWS in the cloud, and again the need for scalability, seem to be a spur to Windows Server 2012.