Welcome to this free eBook on Office 365 and Microsoft 365, brought to you by Altaro Software. We’re going to show you how to get the most out of these powerful cloud packages and improve your business. This book follows an informal reference format, providing an overview of the most powerful features of each platform, along with links to supporting information and further reading if you want to dig into a specific topic.

The intended audience is administrators and IT staff who are either preparing to migrate to Office/Microsoft 365 or who have already migrated and need to get the lay of the land. If you’re a developer looking to create applications and services on top of the Microsoft 365 platform, this book is not for you. If you’re a business decision-maker rather than a technical implementer, it will give you a good introduction to what to expect once your organization has been migrated to the cloud, and to ways you can adopt the various services in Microsoft 365 to improve the efficiency of your business.
THE BASICS
We’ll cover the differences (and why one might be more appropriate for you than the other) in more detail later, but to start off let’s clarify what each package encompasses in a nutshell. Office 365 (from now on referred to as O365) is email, collaboration, and a host of other services delivered as Software as a Service (SaaS), whereas Microsoft 365 (M365) is Office 365 plus Azure Active Directory Premium, Intune (cloud-based management of devices and security), and Windows 10 Enterprise. Both are per-user subscription services that require no (or very little) infrastructure deployment on-premises.
The primary goal of a multi-cloud data management strategy is to supply data, by either copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and the move functions. A key consideration is how unified the multi-cloud data management experience is: in most cases, data protection applications ignore the native user experience of each cloud and use their own proprietary interface as the unifying layer, which increases complexity.
There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically off-site data. It also sets up many of the more advanced use cases.
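The "backup mirror" idea above can be sketched in a few lines: copy any new or changed files from a local backup directory to an off-site target. This is a minimal illustration, not any vendor's implementation; here the target is a local directory standing in for a cloud bucket, and a real deployment would call a cloud storage SDK instead.

```python
import shutil
from pathlib import Path

def mirror_backups(source: Path, target: Path) -> list[str]:
    """Copy files from `source` to `target` when the target copy is
    missing or older -- a stand-in for off-siting backup data to
    cloud object storage."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(str(src.relative_to(source)))
    return copied
```

Because unchanged files are skipped, running this on a schedule approximates the automatic off-siting described above.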
Another use case is using the cloud for disaster recovery.
Another use case is “lift and shift,” where the organization wants to run an application natively in the cloud. The initial steps are similar to the Dev/Test use case, but now the workload is storing unique data in the cloud.
Multi-cloud is a reality now for most organizations and managing the movement of data between these clouds is critical.
A number of limitations today keep organizations not only from lifting and shifting into a single cloud but also from migrating across clouds. Organizations need the flexibility to leverage multiple clouds and to move applications and workloads around freely, whether for data reuse or for disaster recovery. This is where the HYCU Protégé platform comes in. HYCU Protégé is positioned as a complete multi-cloud data protection and disaster-recovery-as-a-service solution, and it includes a number of capabilities that make it notable compared with other approaches in the market.
The cloud computing era is well and truly upon us, and knowing how to take advantage of this computing paradigm while maintaining security, manageability, and cost control is a vital skill for any IT professional in 2020 and beyond. Its importance is only growing.
In this eBook, we’re going to focus on Infrastructure as a Service (IaaS) on Microsoft’s Azure platform: learning how to create VMs, size them correctly, manage storage, networking, and security, and follow backup best practices. You’ll also learn how to operate groups of VMs, deploy resources from templates, manage security, and automate your infrastructure. If you currently have VMs in your own datacenter and are looking to migrate to Azure, we’ll teach you that too.
If you’re new to the cloud (or have experience with AWS/GCP but not Azure), this book will cover the basics as well as more advanced skills. Given how fast things change in the cloud, we’ll cover the why (as well as the how) so that as features and interfaces are updated, you’ll have the theoretical knowledge to effectively adapt and know how to proceed.
You’ll benefit most from this book if you actively follow along with the tutorials. We will cover terms and definitions as we go – learning by doing has always been my preferred way of education. If you don’t have access to an Azure subscription, you can sign up for a free trial with Microsoft. This gives you 30 days to use $200 USD worth of Azure resources, along with 12 months of free resources. Note that most of these “12 months” services aren’t related to IaaS VMs (apart from a few SSD-based virtual disks and a small VM that you can run for 750 hours a month), so be sure to cover everything on the IaaS side before your trial expires. There are also another 25 services with “forever” free tiers.
Now you know what’s in store, let’s get started!
IT infrastructure needs fluctuate constantly in a world where powerful emerging software applications, such as artificial intelligence, can create, transform, and remodel markets in a few months or even weeks. While the public cloud is a flexible solution, it doesn’t solve every data center need, especially when businesses must physically control their data on-premises. Keeping that control has traditionally meant overspend: purchasing servers and equipment to meet peak demand at all times. The result? Expensive equipment sitting idle during non-peak times. For years, companies have wrestled with overspend and underutilization of equipment, but with software-defined composable infrastructure businesses can now reduce capital expenditure and rein in operational spending on underused hardware. With a true composable infrastructure solution, businesses realize optimal performance of IT resources while improving business agility. In addition, composable infrastructure allows organizations to take better advantage of the most data-intensive applications on existing hardware while preparing for future, disaggregated growth.
Download this report to see how composable infrastructure helps you deploy faster, effectively utilize existing hardware, rein in capital expenses, and more.
Not too long ago, a copy of Randall Munroe’s “Thing Explainer” made its way around the SolarWinds office—passing from engineering to marketing to development to the Head Geeks™ (yes, that’s actually a job at SolarWinds. It’s pretty cool.), and even to management.
Amid chuckles of appreciation, we recognized Munroe had struck upon a deeper truth: as IT practitioners, we’re often asked to describe complex technical ideas or solutions. However, often it’s for folks who need a simplified version. These may be people who consider themselves non-technical, but just as easily it could be for people who are technical in a different discipline. Amid frustrated eye-rolling we’re asked to “explain it to me like I’m five years old” (a phrase shortened to just “Explain Like I’m Five,” or ELI5, in forums across the internet).
There, amid the blueprints and stick figures, were explanations of the most complex concepts in hyper-simplified language that had achieved the impossible alchemy of being amusing, engaging, and accurate.
We were inspired. What you hold in your hands (or read on your screen) is the result of this inspiration.
In this book, we hope to do for IT what Randall Munroe did for rockets, microwaves, and cell phones: explain what they are, what they do, and how they work in terms anyone can understand, and in a way that may even inspire a laugh or two.
Make the Move: Linux Remote Desktops Made Easy
Securely run Linux applications and desktops from the cloud or your data center.
Download this guide and learn...
Assess what you already have
If you have a business continuity plan or a disaster recovery plan in place, that’s a good place to start. This scenario may not fit the definition of disaster you originally intended, but it lets you test your plan in a more controlled fashion. That benefits your current situation by giving you a head start, and it benefits your overall plan by revealing gaps that would be far more problematic in an urgent or catastrophic situation with less time to prepare and implement.
Does your plan include access to remote desktops in a data center or the cloud? If so, and you already have a service in place ready to transition or expand, you’re well on your way.
Read the guide to learn what it takes for IT teams to set up staff to work effectively from home with virtual desktop deployments. Learn how to get started, whether you’re new to VDI or already have an existing remote desktop deployment but are looking for alternatives.
A traditional VDI model can come with high licensing costs and limited opportunity to mix and match components to suit your needs, not to mention the fact that you’re locked into a single vendor.
We've compiled a list of 5 reasons to think outside the traditional VDI box, so you can see what is possible by choosing your own key components, not just the ones you're locked into with a full stack solution.
The future of compute is in the cloud
Flexible, efficient, and economical, the cloud is no longer a question - it's the answer.
IT professionals who once considered if or when to migrate to the cloud are now talking about how. Earlier this year, we reached out to thousands of IT professionals to learn exactly that.
Private Cloud, On-Prem, Public Cloud, Hybrid, Multicloud - each of these deployment models offers unique advantages and challenges. We asked IT decision-makers how they are currently leveraging the cloud and how they plan to grow.
Survey respondents overwhelmingly believed in the importance of a hybrid or multicloud strategy, regardless of whether they had actually implemented one themselves.
The top reasons for moving workloads between clouds
Companies have NAS systems all over the place: hardware-centric devices that make data difficult to migrate and to leverage in support of the business. It’s natural that companies would want to consolidate those systems, and vFilO is a technology that could prove quite useful as an assimilation tool. Best of all, there’s no need to replace everything. A business can modernize its IT environment and finally achieve a unified view, plus gain more control and efficiency, via the new “data layer” sitting on top of the hardware. When those old silos finally disappear, employees will discover they can find whatever information they need by searching what appears to be one big catalog over a large pool of resources.
And for IT, the capacity-balancing capability should have especially strong appeal. With it, file and object data can shuffle around and be balanced for efficiency without IT or anyone needing to deal with silos. Today, too many organizations still perform capacity balancing work manually—putting some files on a different NAS system because the first one started running out of room. It’s time for those days to end. DataCore, with its 20-year history offering SANsymphony, is a vendor in a great position to deliver this new type of solution, one that essentially virtualizes NAS and object systems and even includes keyword search capabilities to help companies use their data to become stronger, more competitive, and more profitable.
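The manual capacity-balancing chore described above amounts to a simple greedy loop: repeatedly move a file from the fullest system to the emptiest one until the spread stops shrinking. This is an illustrative sketch of the concept only, not how vFilO itself works; system names and file sizes are made up.

```python
def rebalance(systems: dict[str, list[int]]) -> dict[str, list[int]]:
    """Greedy capacity balancing across storage systems.
    `systems` maps system name -> list of file sizes. While moving the
    fullest system's smallest file to the emptiest system would still
    reduce the spread between them, do the move."""
    def used(name: str) -> int:
        return sum(systems[name])

    while True:
        fullest = max(systems, key=used)
        emptiest = min(systems, key=used)
        smallest = min(systems[fullest], default=0)
        # stop when the move would no longer narrow the gap
        if used(fullest) - used(emptiest) <= smallest:
            break
        systems[fullest].remove(smallest)
        systems[emptiest].append(smallest)
    return systems
```

A product doing this policy-driven and transparently spares IT exactly this bookkeeping.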
DataCore vFilO is a top-tier file virtualization solution. Not only can it serve as a global file system, but IT can also add new NAS systems or file servers to the environment without having to remap users to the new hardware. vFilO supports live migration of data between the storage systems it has assimilated, and it leverages the global file system and the software’s policy-driven data management to move older data automatically to less expensive storage, whether high-capacity NAS or an object storage system. vFilO also transparently moves data from NFS/SMB to object storage. If users need access to this data in the future, they access it as they always have; to them, the data has not moved.
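The policy-driven movement of older data to cheaper storage can be illustrated with a minimal age-based policy. The 90-day threshold and the tier names are illustrative assumptions, not vFilO's actual policy engine; the point is that placement is decided by policy while the namespace users see stays the same.

```python
import time

def tier_files(files: dict[str, float], max_age_days: float = 90) -> dict[str, str]:
    """Assign each file to a storage tier based on last-access age.
    `files` maps path -> last-access timestamp (seconds since epoch).
    Files untouched for `max_age_days` go to the cheap "object" tier;
    the rest stay on "nas". Paths are unchanged either way."""
    cutoff = time.time() - max_age_days * 86400
    return {path: ("object" if atime < cutoff else "nas")
            for path, atime in files.items()}
```

Run periodically, a rule like this drains cold data off expensive primary storage without users noticing.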
The ROI of file virtualization is powerful, but the technology has struggled to gain adoption in the data center: file virtualization needs to be explained, and explaining it takes time. vFilO more than meets the requirements of a top-tier file virtualization solution, and DataCore has the advantage of over 10,000 customers who are much more likely to be receptive to the concept, since they have already embraced block storage virtualization with SANsymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, opening new doors for SANsymphony.