
Using Technology to Combat Climate Change

Ocean waters are warming and becoming more acidic, ice caps are melting, and sea levels are rising. Warmer global temperatures affect our water supplies, agriculture, power and transportation systems, the natural environment, and even our own health and safety. Multiple studies published in peer-reviewed scientific journals show that 97 percent or more of actively publishing climate scientists agree [1]: climate-warming trends over the past century are mostly due to human activities. While technology has played its part in causing climate change, it can also help us get to solutions. Here are five initiatives taking place in the technology community that can fight climate change.

1. Data centers

The world's most influential companies, including Apple, NIKE, IKEA, Johnson & Johnson, and Starbucks, representing over US $1 trillion in annual revenue, are committed to 100% renewable power. Much of the energy used in data centers is not consumed by the actual technology; instead, it goes to cooling the servers. As well as delivering on emission-reduction goals, renewable power can help manage fluctuating energy costs, improve reputation, and provide energy security. It also shows business leadership on climate change. This could have massive impact if paired with robust government policy that boosts confidence and enables long-term investments.

2. Mobile apps

It takes some digging to find apps that will help you create real change on a daily basis, but they're out there. Here are some examples of apps that can help you monitor and reduce your carbon footprint and waste:

- Oroeco tracks your carbon footprint by placing a carbon value on everything you buy, eat, and do, and then shows you how you compare with your neighbors.
- PaperKarma is an easy way to cut paper waste. Take a photo of your junk mail, send it through the app, and PaperKarma will figure out what it is and take you off the mailing list.
- GiveO2 tracks your carbon footprint as you travel. Turn on the tracker when you start a new trip, and it will automatically calculate a timeline of your carbon usage. At the end, you can "offset" it by supporting a sustainable project of your choice.

3. IoT

Monitoring our energy usage makes it possible to be smarter about it. Take Nest, for instance. While an unprogrammed thermostat can waste 20% of heating and cooling, Nest tackles the issue with a smart thermostat that learns your patterns and automatically adjusts to save energy. The Internet of Things can reduce energy use and carbon footprints with things as simple as using an app to turn off the lights, or with apps like IFTTT, which hooks up to many different types of systems. The IoT can also involve monitoring your sprinkler system to save water, or using sensors to tell you to take a different route when driving to avoid idling in traffic and wasting gas.

4. Open source movement

Open data and open source technologies are a huge way to accelerate environmental research and innovation. Take Tesla, for example. By opening the company's patents to everyone, Elon Musk wanted to make sure electric vehicles succeeded faster.

5. Mapping

Interactive maps really drive home the point of climate change and can lead the way to remedies. Map layers defining vegetation, soil type, geology, precipitation, and human infrastructure can help model and plan for future change. New mapping technology can make us safer and less reliant on fossil fuels.
The U.S. Geological Survey's 3D Elevation Program is being developed to use advanced mapping to better update hazard maps for floods and earthquakes and to identify the best areas for solar and wind farms. As you can see, many of these things only require small changes from individuals in order to make a difference for our climate. Some will require much more intentional decisions from businesses. The good news, however, is that with this intentionality, individuals and corporations alike can take action to help our climate. If we're all in this together, perhaps it's time that we take a look, individually and corporately, at how we can make a difference.

[1] J. Cook et al., "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming," Environmental Research Letters, vol. 11, no. 4, 13 April 2016. DOI: 10.1088/1748-9326/11/4/048002


Are Personal VPN Services for you?

What VPN Services Are a Good Fit for You?

I am certain that many people are familiar with the concept of a VPN, or virtual private network. They are used to gain access to a corporate network while traveling or at home. I would assert, though, that few are familiar with the growing market of personal VPN providers who target non-commercial entities and individuals. According to Grand View Research: "The global virtual private network market size was estimated at USD 41.33 billion in 2022 and is projected to reach USD 151.92 billion by 2030, growing at a CAGR of 17.7% from 2023 to 2030."

This emerging market has many players, and understanding which to use can be quite confusing to someone who is not well informed on the topic. We will explore how personal VPNs differ from the often-provided corporate VPNs, as well as discuss how to use them to protect your private information aside from how the company protects itself.

Corporate vs. Personal VPN

Most business travelers do utilize corporate VPN connections to gain access to company files and services. When the traffic is not destined for the business network, the traffic is not tunneled back to the home office. This configuration is called split tunneling, and it is very common. Network engineers configure this so that users will not saturate the VPN concentrators with YouTube, Netflix, and other non-business-related traffic. Additionally, as business travelers it is incumbent on us to use company assets such as bandwidth in a manner that is in line with our companies' policies.

How do personal VPN services work?

Most if not all personal VPNs use client-based software to encapsulate and encrypt the traffic. This makes it very hard to unscramble if someone is capturing data from an open network. The other mechanism used is to proxy the traffic to a third party, where it is then sent on to the eventual destination. The proxy or redirect mechanism ensures that the transmitted information doesn't contain the public IP of the coffee shop or public network, but rather the source IP address of the proxy service. This is important when connecting to financial institutions that monitor the public IP address you normally connect from. If another person attempts to connect to your bank account through the bank's web portal, the bank will notice that it is not the public IP address you typically connect from and disallow the login attempt.

Who are they for?

Firstly, who should use a personal VPN? The answer is anyone who regularly uses free and open Wi-Fi in public places. Anytime you connect to the coffee shop Wi-Fi or to a hotel's guest network, you are at risk of someone intercepting the unencrypted traffic you are sending and receiving. This vulnerability has existed for many years and is called a "man-in-the-middle attack." This sort of intrusion used to be carried out by very skilled hackers, but these days the attack is very well documented, and toolkits that exploit these common scenarios are being used by very unskilled characters. Personal VPNs circumvent the vulnerabilities of open networks by using encapsulation and traffic proxying so that the connection is more secure and free from prying eyes.

Good rules to live by regarding when you should use a VPN include:

- Are you logging in to a private or personal account of any kind?
- Are you transmitting information that is proprietary?
- Is there personal or customer information being transmitted?
- Is banking or financial information being received?

If any of these conditions exist, I would recommend using a personal VPN to protect yourself. The many protective and anonymizing mechanisms that these VPNs employ allow you to safely transmit and receive data without the risk of intrusion. This article from PCMag.com gives a great overview of how VPNs work and how each VPN company compares to the others.

How can ProCern help?

At ProCern, our expertise and offerings focus on corporate VPNs and other firewall services. We find it equally important that our clients and partners understand the risks of connecting to open public Wi-Fi without protective measures. We have all heard of the unlucky Hollywood stars who have had their personal accounts targeted or hacked at great expense to themselves. Reputations are very important in business and in private life. Remember that the weakest link is not the traffic we monitor; it is the traffic we do not.
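To make the source-IP mechanism described above concrete, here is a minimal sketch that prints the public IP address remote servers currently see for your machine. Run it once on the open network and once with your personal VPN connected; the address should change to the VPN provider's exit address. The ipify service and the Python requests library are assumptions chosen for illustration; any "what is my IP" endpoint would do.

```python
import requests

def apparent_public_ip() -> str:
    """Return the public IP address that remote servers see for this machine."""
    # ipify is a free "what is my IP" service; swap in any equivalent endpoint.
    response = requests.get("https://api.ipify.org", timeout=10)
    response.raise_for_status()
    return response.text.strip()

if __name__ == "__main__":
    # Run once on the bare coffee-shop Wi-Fi, then again with the VPN up.
    # A personal VPN that tunnels and proxies your traffic should report the
    # provider's exit IP here, not the public IP of the open network.
    print("Servers currently see you as:", apparent_public_ip())
```

If the address does not change while the VPN client reports it is connected, some of your traffic may be leaking outside the tunnel.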


Linux- The Swiss Army Knife of Operating Systems

What is Linux?

Strictly speaking, Linux is the kernel, or core, of Linux distributions. I like to think of the Linux kernel as the baseplate of a Lego set: it is what every other piece attaches to. A distribution, or "distro" for short, is a complete operating system including a kernel, packages, package managers, and everything else needed. In other words, distros are pre-assembled building-block sets tailored to user preferences and needs. Red Hat, SUSE, and Ubuntu are examples of some of the more popular distros. There are too many others to list here, but here are some resources to give you an idea: Resource One, Resource Two.

Linux Use Cases

As a long-time Linux user, I can understand why it didn't take off as a general-purpose operating system like MS Windows. There are just too many choices, too many differences between those choices, and a perceived lack of standardization. There's also the reputation of being difficult to use. Why, then, would anyone want to use Linux instead of Microsoft Windows or Mac OS X, for example? Like the number of distros, the number of Linux use cases is also very extensive, so I'll cover just a few popular ones.

IoT

Open source software is typically free, though some commercially backed distros such as RHEL do charge subscription fees. Linux runs on many different types of hardware: IoT devices, personal computers, networking equipment, load balancers, supercomputers, and just about anything, it seems. It can run on very low-end or less common hardware. This makes it perfect for IoT devices, where processing power may be limited and cost needs to be kept down.

Related: Linux Logical Volume Manager Overview

"The Cloud"

Linux also powers very high-end hardware, including much of what powers "the cloud." Servers powering the internet need a reliable operating system that can run continuously without downtime while maintaining a high level of security. There are far fewer circumstances in which Linux-based OSes require a reboot. Though I wouldn't recommend it for most use cases, it is certainly possible for Linux devices to run continuously for years without a reboot. It is also much easier to avoid viruses and malware. This makes it great for web servers, databases, load balancers, routers, switches, firewalls, storage servers, virtual machine hypervisors, and many other pieces of critical IT infrastructure.

Containers

The trend to "containerize" everything has taken the world by storm. Though MS Windows containers are now an option, until recently Linux was your only option. It is a much more mature platform for containers, with better documentation and support. It is also much lighter weight, which allows for much denser deployments as well as portability. One example of a popular container OS is Alpine Linux: "It is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository."

AI

AI, machine learning, and deep learning are also getting a lot of attention these days. Linux offers a number of advantages in this space, including better integration with containers. There are many examples and lots of documentation to help someone building an AI project on Ubuntu, for example.
Want to train your model in the cloud, but deploy at the edge to a low-powered IoT device and/or container? You will likely have a much easier time, along with better and more predictable results, on Linux. If you are considering an IoT or AI project, and/or the infrastructure required to support it, ProCern has the expertise. Schedule an assessment today; we'd love to help!
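Because the same kernel runs everywhere from IoT boards to cloud servers, scripts often need to discover which distro they are running on. As a small illustration of the kernel/distro distinction discussed above, this sketch reads the conventional /etc/os-release file and the kernel version. The file and its fields are standard on most modern distros, but very minimal or embedded images may omit them, so treat this as a rough example.

```python
import platform
from pathlib import Path

def detect_distro(os_release_path: str = "/etc/os-release") -> dict:
    """Parse /etc/os-release into a dict of KEY=value pairs (empty if absent)."""
    info = {}
    path = Path(os_release_path)
    if path.exists():
        for line in path.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                info[key.strip()] = value.strip().strip('"')
    return info

if __name__ == "__main__":
    distro = detect_distro()
    # The kernel is the same project everywhere; the distro is the
    # "pre-assembled building block set" wrapped around it.
    print("Kernel:", platform.release())
    print("Distro:", distro.get("PRETTY_NAME", "unknown"))
```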


Linux Logical Volume Manager Overview

Do your Linux servers use LVM? If not, you should strongly consider it, unless you are using ZFS, BTRFS, or other "controversial" filesystems. ZFS and BTRFS are outside the scope of this discussion but are definitely worth reviewing if you haven't heard of them and are running Linux in your environment.

Logical Volume Manager for Linux is a proven storage technology created in 1998. It offers layers of abstraction between the storage devices in your Linux system and the filesystems that live on them. Why would you want to add an extra layer between your servers and their storage, you might ask? Here are some reasons:

- Flexibility: you can add more physical storage devices as needed and present them as a single filesystem.
- Online maintenance: need to grow or shrink your filesystems, online and in real time? This is possible with LVM. It is also possible to live-migrate your data to new storage.
- Thin provisioning: this allows you to over-commit your storage if you really want to.
- Device naming: you can name your devices something that makes sense instead of whatever name Linux gives them. Meaningful names like Data, App, or DB are easier to understand than sda, sdb, and sdc, which also reduces mistakes when working with block devices directly.
- Performance: it is possible to stripe your disks and improve performance.
- Redundancy: it is also possible to add fault tolerance to ensure data availability.
- Snapshots: this is one of my favorite reasons for using LVM. You can take point-in-time snapshots of your system, copy them off somewhere else, or mount them and manipulate the data more granularly. Want to do something risky on your system and, if it doesn't work out, have a quick rollback path? LVM is perfect for this.

So how does it work? According to Red Hat: "Logical Volume Management (LVM) presents a simple logical view of underlying physical storage space, such as hard drives or LUNs. Partitions on physical storage are represented as physical volumes that can be grouped together into volume groups. Each volume group can be divided into multiple logical volumes, each of which is analogous to a standard disk partition. Therefore, LVM logical volumes function as partitions that can span multiple physical disks."

LVM is much easier to understand with a diagram in mind. Physical storage devices recognized by the system can be presented as PVs (Physical Volumes). These PVs can be either entire raw disks or partitions. A VG (Volume Group) is composed of one or more PVs. This is a storage pool, and it is possible to expand it by adding more PVs. It is even possible to mix and match storage technologies within a VG. The VG can then allocate LVs (Logical Volumes) from the pool of storage, which are seen as raw devices. These devices would then get formatted with the filesystem of your choice. They can grow or shrink as needed, so long as space is available in either direction for the operation.

You really should be using LVM on your Linux servers. Without LVM, many of the operations discussed above are typically offline, risky, and painful. These all amount to downtime, which we in IT like to avoid. While some may argue that the additional abstractions add unnecessary complexity, I would argue that LVM really isn't that complicated once you get to know it.
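To show how approachable the day-to-day workflow is, here is a minimal sketch of the typical LVM lifecycle, driving the standard lvm2 command-line tools from Python. The device names, volume names, and sizes are purely hypothetical, the commands require root and the lvm2 package, and several of them are destructive, so treat this as an illustration rather than something to run against a production system.

```python
import subprocess

# Hypothetical devices and names -- substitute your own before running anything.
PVS = ["/dev/sdb", "/dev/sdc"]
VG = "data_vg"
LV = "app_lv"

def run(cmd):
    """Echo a command and run it; requires root and the lvm2 tools."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["pvcreate"] + PVS)                        # mark the disks as physical volumes
    run(["vgcreate", VG] + PVS)                    # pool the PVs into a volume group
    run(["lvcreate", "-n", LV, "-L", "50G", VG])   # carve out a logical volume
    run(["mkfs.ext4", f"/dev/{VG}/{LV}"])          # format it like any block device
    # Grow the LV and its filesystem online later (-r resizes the filesystem too).
    run(["lvextend", "-L", "+10G", "-r", f"/dev/{VG}/{LV}"])
    # Take a point-in-time snapshot, e.g. before a risky change.
    run(["lvcreate", "-s", "-n", f"{LV}_snap", "-L", "5G", f"/dev/{VG}/{LV}"])
```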
The value of using LVM greatly outweighs the complexity, in my opinion. The value proposition is even greater when using LVM on physical Linux nodes with local storage. SAN storage and virtual environments in hypervisors typically have snapshot capabilities built in, but even those do not offer all of the benefits of LVM, and it adds another layer of protection in those instances. The aforementioned ZFS and BTRFS are possible alternatives, and arguably better choices depending on who you ask. However, due to the licensing (ZFS) and potential stability (BTRFS) issues, careful consideration is needed with those technologies. Perhaps those considerations are topics for a future blog… Want to learn more? Please reach out, we're here to help.


A History of the Semiconductor Market: AMD vs Intel

We are all winners when there is competition in the semiconductor industry

A personally narrated history of the semiconductor market in my lifetime. For many years, I have been labeled by my friends and peers in the industry as an AMD fanboy. In truth, I am a fan of competition and of a free market that drives innovation and keeps prices affordable for everyone. In this blog, we will dive into a little history about AMD. They are very relevant today, not only in the PC market but in the data center as well.

Some history

When people think of computer processors, the brand Intel typically comes to mind. They have been pioneers in the consumer and enterprise microprocessor industry for more than half a century. The company was founded in 1968 in California by Gordon E. Moore, a chemist, and Robert Noyce, a physicist. Throughout much of the twentieth century, advancements in computer processing could most notably be attributed to the Intel Corporation. In the mid-1970s, something interesting happened in the microprocessor market. Another American company, AMD (Advanced Micro Devices), known at the time for providing licensed second-source manufacturing for Intel and others, started to develop and sell its own unique microprocessor designs. This was the catalyst for consumers and OEMs to have a choice in who provided their computer processors. Until that time, Intel had solely provided, or licensed others to make, the processors for the IBM personal computer and other enterprise products.

1980s-1990s – AMD can compete … mostly

Throughout the 80s and 90s, AMD was making licensed copies or clones of Intel processors with relative success. In 1996, AMD released its first in-house-designed x86 processor, which competed with the Intel Pentium processors operating at 75-133 MHz. They weren't developing anything revolutionary, but they were providing a cheaper alternative to Intel and driving innovation to some degree. This is the era in which I became an AMD customer. At the time, I could not afford a Pentium-based PC, so I cobbled together components that I could afford to build my first computer. It had an AMD K6 processor at 266 MHz and 16 MB of RAM. It wasn't much, but I could do my schoolwork on it, and it played a few games.

2000-2010 – Things are looking up for AMD

In the early 2000s, AMD released their socketed Athlon processors. They were true game changers, as they supported features like on-die L2 cache and double data rate RAM. Later, in 2003, AMD beat Intel to the punch by introducing the first 64-bit x86 processors, taking the innovation crown for a short period of time. The same generation brought AMD's first server-class processor, the Opteron, a very powerful and viable alternative to Intel's Xeon. Budget-conscious businesses had another option when choosing servers for their data centers that wouldn't break the bank. During this time frame, multi-core processors were introduced to the marketplace, and AMD pursued this trend with positive results.

At this time in my life, I was working in IT and finally had some money to build the computers I wanted to build. Again, I chose AMD because of their price-to-performance ratio compared to Intel. My thought process involved simple mathematics: if I could achieve 90 percent of the performance of the Intel equivalent for 60 percent of the cost, then it seemed like a good choice.
#lawofdiminishingreturns

2010-2015 – The dark ages of which we do not speak

During this time frame, AMD was handily beaten by Intel by most legitimate metrics. They did not innovate or develop new core architectures but chose to pile on the physical processing cores, a strategy that failed them for the better part of a decade. The consumer products weren't competitive, and the server processors were relegated to budget options and entry-level servers for small businesses. Although I owned many computers based on this architecture, it was a low point for me, and I did lose some faith in the company. My concerns centered around the lack of competition in the marketplace. Monopolies are good for no one except Mr. Monopoly, whoever that may be.

2016-Today – The enlightenment and salvation

In 2016, AMD introduced the Zen microprocessor architecture, marketed as Ryzen, to the world. This revolutionary microarchitecture displayed IPC (instructions per clock cycle) gains of almost 52 percent compared to the previous Bulldozer architecture. AMD was back in the game in a big way. In the consumer market, AMD now sells processors that are faster than Intel offerings costing twice as much. In the enterprise, AMD has continued to increase core counts with this newest architecture and has taken some of Intel's market share in the data center.

Today

AMD has released the second iteration of the Zen architecture, called Zen 2. The enterprise offering is called 2nd Gen EPYC. This architecture is truly displacing the Intel offering because it can compete on more than one level. The IPC is on par with, and often exceeds, the Intel equivalent. The core counts far exceed what Intel has, with a 64-core/128-thread processor named the EPYC 7742. This processor by itself could facilitate a virtual environment for most small to mid-sized businesses. The processor is so revolutionary that virtualization/hypervisor companies are changing their licensing models for fear that a single-socket host would undercut their licensing revenue.

Who offers it? Hewlett Packard Enterprise, a company that has always been an advocate and an ally to AMD. They sell consumer devices outfitted with the newest AMD Ryzen processors based on the Zen 2 architecture, and their servers utilize the newest 2nd Gen EPYC processors based on the same microarchitecture. These solutions offer better performance, and pricing is competitive to the point where they displace any Intel offering. As an unabashed AMD fanboy, I urge you to look at the metrics and decide for yourself. In almost any computing workload, AMD is a competitive and cost-effective option. Contact ProCern today for more information.


Aruba Networks – ClearPass Policy Management Platform

There was a time when IT was the gatekeeper of everything enterprise and ruled with a combination of strict policies and purpose-built technologies. There was no need for technologies like ClearPass. Those days are over. Today, billions of Wi-Fi-enabled smartphones and tablets are pouring into the workplace. Users are armed with three devices or more, and each contains over 40 business and personal apps. Users have far more flexibility to connect their own smartphones and tablets to the network and download the apps of their choice. The expectation is that the mobility experience just works, whether you are at home or in the office. The boundaries of IT's domain now extend beyond the enterprise, and the expectation is that users can connect from anywhere. The goal is to provide anytime, anywhere connectivity without sacrificing security.

How does IT maintain visibility and control? As a foundation, IT must have a firm understanding of three things:

- Where devices are being used
- How many devices are being used per user
- Which operating systems are supported

The next key is for IT to decide what happens when users and devices are not in compliance.

The ClearPass Policy Management Platform

The ClearPass Policy Management Platform from HPE Aruba Networking takes a fresh approach to solving the mobility challenge: an approach that provides IT a simple way to build a foundation for enterprise-wide policies, strong security, and an enhanced user experience. From this single ClearPass policy platform, contextual data is leveraged across the network to ensure that users and devices are granted appropriate access privileges, regardless of access method or device ownership. Mobility policies need to take into account user roles, device types, available MDM data and certificate status, location, day of week, and time of day.

ClearPass Benefits

- Policies and AAA services that support any wireless, wired, and VPN environment.
- Network privileges based on real-time contextual data: user roles, device types, location, and time of day.
- Built-in device profiling that identifies device types and attributes for everything that connects.
- Real-time troubleshooting tools that help solve connectivity and user issues quickly.
- Built-in integration that allows you to build a coordinated defense effort where everything works as one solution.

Providing a seamless mobility experience for today's mobile workforce has created a host of new challenges. ClearPass solves these challenges by providing a platform that delivers policy control, workflow automation, and visibility from a single cohesive solution. By capturing and correlating real-time contextual data, ClearPass enables you to define policies that work in any environment: wireless, wired, or VPN.
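To illustrate the kind of contextual decision such a platform makes, here is a deliberately simplified, hypothetical sketch. It is not ClearPass policy syntax or its API; it only shows how attributes like user role, device type, certificate status, MDM posture, and time of day can combine into an access decision.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    # Hypothetical contextual attributes, loosely mirroring those named above.
    user_role: str          # e.g. "employee", "contractor", "guest"
    device_type: str        # e.g. "corporate-laptop", "byod-phone"
    has_valid_cert: bool    # device certificate status
    mdm_compliant: bool     # posture reported by an MDM, if available
    local_time: time

def decide_access(req: AccessRequest) -> str:
    """Return a network privilege level for the request (illustrative only)."""
    if not req.has_valid_cert:
        return "guest-vlan"                 # unknown devices get internet-only access
    if req.user_role == "employee" and req.device_type == "corporate-laptop":
        return "full-access" if req.mdm_compliant else "remediation-vlan"
    if req.user_role == "contractor" and time(7, 0) <= req.local_time <= time(19, 0):
        return "contractor-vlan"            # time-of-day restricted access
    return "deny"

# Example: a compliant corporate laptop at 9:30 AM gets full access.
print(decide_access(AccessRequest("employee", "corporate-laptop", True, True, time(9, 30))))
```

A real policy engine evaluates far richer context than this, but the principle is the same: the decision is driven by who and what is connecting, and under what circumstances, rather than by the port or SSID alone.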


You’re a Timex Watch in a Digital Age

"John, you're a Timex watch in a digital age," snidely quips Thomas Gabriel, the brilliant but maniacal cyber-villain of Live Free or Die Hard, the fourth (and, some would argue, best) entry in the Die Hard film series. The "John" in question is, of course, none other than protagonist John McClane, the old-fashioned NYC police detective who becomes the reluctant hero in all five Die Hard films. In this particular installment, sophisticated hackers launch a targeted attack on the United States' infrastructure, gaining control of all government-controlled computers and essentially holding the country for ransom. Can an aging cop with old-school tactics hold his own against these advanced and intelligent cyber-terrorists?

The Best Ways to Prepare for the Future

Well, first of all… of course he can. We ARE talking about a Bruce Willis movie here (and he always wins, right? Maybe not in The Sixth Sense… but I'll save that for another blog). By this point in the series, John has already saved dozens of hostages, taken on countless henchmen, and happily exclaimed "yippee-ki-yay" all along the way. His encounters thus far, however, have strictly been with semi-traditional adversaries using semi-traditional means of operation. How then, you might ask, is a fifty-something cop who's known for being "the wrong guy, in the wrong place, at the wrong time" going to keep pace with such a formidable, high-tech group of foes? By rolling with the punches! Duh! Though he definitely takes matters into his own hands and deals with the situation "the John McClane way," he does so while also gaining a different perspective, aligning with the proper allies, and obtaining the resources that do not come to him naturally (in this case, seeking out help from a couple of virtuous hackers, Matt Farrell and 'The Warlock', who are willing to use their talents to aid his cause). Later in the film, after narrowly surviving an onslaught of lethal henchmen, John retorts to Thomas Gabriel, "I know I'm not as smart as you guys at all this computer stuff. But, hey… I'm still alive, ain't I?"

Although it may seem superfluous, the character arc that I have just laid out brings up an interesting point: change. We see it in almost every aspect of our lives. Our neighborhoods, our bodies, the media, the stock market, our friends… you name it! The IT industry is no different. From the massive big-data servers of yesteryear to virtualization and advancements in cloud computing, infrastructure and disruptive technologies are constantly evolving and improving. It is how we adapt and move forward with this change that is important. Here at ProCern, we strive to be the support system that helps customers implement and transition into this change.

This is where the Die Hard analogy comes into play. Bruce Willis' character from the film series is relatable simply because he is often a victim of circumstance, forced to react when these seemingly insurmountable situations arise. His old-school style is appealing, but it cannot be maintained forever. This becomes evident in Live Free or Die Hard when the villains John McClane is pitted against use tactics that are completely alien to him. Does he completely transform overnight? No. But he does reassess his environment and adapt accordingly. The same can be said of IT. The industry, technology, and processes are constantly changing, and it is imperative that the customer base have the means to follow suit. ProCern has all the tools to make this happen.
We partner with the top technology organizations, and our Solution Architects have an average of 20+ years of experience. Whether it is hyperconvergence, cloud migrations, hybrid IT, infrastructure refreshes, health checks, assessments… you name it! ProCern is here to help. Nothing has to change overnight; a complete 180 is not necessary. Just a gradual discovery of the exciting new things on the IT horizon. We are here to help! In this scenario, ProCern is "The Warlock" (watch the movie and you will understand) and you, the customer, are John McClane. That's right, YOU get to be John McClane! You're the hero, and we are the humble sidekick that helps you accomplish your goals! We are here for all your infrastructure solutions. Just let us know how we can help! In closing, there are many ways to evaluate your IT environment and ascertain the best ways to prepare for the future. Cost efficiency, security, consolidation, streamlining, and many other areas can all be addressed. Remember… you're John McClane! And ProCern is here to aid you in your quest. Yippee-ki-yay!


Data Center Tiers

Data Center Tier Requirements

Data center tiers are an efficient way to describe the infrastructure components being utilized at a business's data center. Although a Tier 4 data center is more complex than a Tier 1 data center, this does not necessarily mean it is best suited for an organization's needs. While investing in Tier 1 infrastructure might leave a business open to risk, Tier 4 infrastructure might be an over-investment.

- Tier 4 Data Center: built to be completely fault tolerant, with redundancy for every component. It has an expected uptime of 99.995% (26.3 minutes of downtime annually).
- Tier 3 Data Center: has multiple paths for power and cooling and systems in place to update and maintain it without taking it offline. It has an expected uptime of 99.982% (1.6 hours of downtime annually).
- Tier 2 Data Center: has a single path for power and cooling and some redundant and backup components. It has an expected uptime of 99.741% (22 hours of downtime annually).
- Tier 1 Data Center: has a single path for power and cooling and few, if any, redundant and backup components. It has an expected uptime of 99.671% (28.8 hours of downtime annually).

Tier 4 Data Center Requirements

- Zero single points of failure: redundancies for every process and data protection stream. No single outage or error can shut down the system.
- 99.995% uptime per annum.
- 2N+1 infrastructure (two times the amount required for operation plus a backup): fully redundant.
- No more than 26.3 minutes of downtime per annum: some downtime is allowed for optimized mechanical operations; however, this annual downtime does not affect user operations.
- 96-hour independent power outage protection: this power must not be connected to any outside source and is entirely proprietary. Some centers may have more.

Tier 4 is considered an enterprise-level service and has approximately twice the site infrastructure of a Tier 3 location. If you need to host mission-critical servers, this is the level to use. Tier 4 data centers ensure the safety of your business regardless of any mechanical failures. You will have backup systems for cooling, power, data storage, and network links. Data center security is compartmentalized with biometric access controls. Full fault tolerance keeps any problems from ever slowing down your business. This is true even if you host less critical servers in other tier levels. This tier also ensures optimized efficiency: your servers are housed in the most physically advantageous locations, which drastically extends the life of your hardware. If the temperature and humidity are kept consistent, you gain a great deal of efficiency. Even the backups and dual power sources are treated like primaries.

Tier 3 Data Center Requirements

- 99.982% uptime per annum.
- N+1 fault tolerance (the capacity required for operation plus one backup component): the ability to undergo routine maintenance without a hiccup in operations. Unplanned maintenance and emergencies may cause problems that affect the system and could potentially affect user-facing operations.
- No more than 1.6 hours of downtime per annum: downtime is allowed for maintenance and overwhelming emergency issues.
- 72-hour independent power outage protection: at least three days of exclusive power. This power cannot connect to any outside source.

Tier 3 provides most of the features of a Tier 4 infrastructure without some of the elite protections.
For instance, the data center has the advantage of dual power sources and redundant cooling, and the network streams are fully backed up. Guaranteed uptime is slightly less than Tier 4, and the system is not entirely fault tolerant.

Tier 2 Data Center Requirements

- 99.741% uptime per annum.
- No more than 22 hours of downtime per annum: there is a considerable jump between levels 2 and 3 regarding downtime, and redundancy is one of the primary reasons for this.
- Partial cooling and multiple power redundancies: a Tier 2 data center will not have redundancy in all areas of operation. The most critical aspects of its mechanical structure, power and cooling distribution, receive priority, and redundancy in these areas is only partial. No part of the system is fault tolerant.

The utility of a Tier 2 data center is fundamentally different. If your business prioritizes redundant capacity components, then you may want to look at this level of infrastructure.

Tier 1 Data Center Requirements

- 99.671% uptime per annum.
- No more than 28.8 hours of downtime per annum.
- Zero redundancy: a Tier 1 data center will not have redundancy in any part of its operations. Facilities do not have any redundancy guarantees within their power and cooling certification process.

Tier 1 infrastructure is designed for companies that need a colocation data center and is the most budget-conscious option for a business. The infrastructure consists of a single uplink, a single path for power, and non-redundant servers. The tier classification system is another way of assessing redundancy and uptime reliability as you determine your organization's data center needs. Contact ProCern to talk about your data center needs.
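The uptime percentages and downtime figures quoted above are two views of the same number. This quick sketch converts an availability percentage into expected annual downtime; it reproduces the commonly quoted tier figures to within rounding (the math gives roughly 22.7 hours for Tier 2, conventionally rounded to 22).

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(uptime_percent: float) -> float:
    """Expected minutes of downtime per year for a given availability."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

tiers = [("Tier 4", 99.995), ("Tier 3", 99.982), ("Tier 2", 99.741), ("Tier 1", 99.671)]
for name, uptime in tiers:
    minutes = annual_downtime_minutes(uptime)
    print(f"{name}: {uptime}% uptime -> {minutes:.1f} minutes "
          f"({minutes / 60:.1f} hours) of downtime per year")
```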


Hybrid IT Choice

Hybrid IT or Public Cloud?

Recently, I had the privilege of attending a round table to talk about cloud computing and hybrid IT. The discussion was to share information on where IT professionals were on their cloud journey within their organization and why companies make the jump to hybrid or public cloud. The attendees were mostly Director- and C-level executives, with representation from almost every spectrum of business. Every stage of the cloud journey was represented: companies that have everything on-premises and companies with 100% of their IT in the cloud, very small companies and some of the largest companies in Colorado, local companies and international companies. It was a great cross-representation and led to some interesting conversations.

What I found interesting was why some of these organizations had moved to the cloud and why some had not. The discussion also turned to when the idea of hybrid IT made the most sense for a company. There were a few that had a "cloud first" approach to IT, but most people in the room agreed that hybrid IT made the most sense. It really depended on the market and the size of the company. For example, if most of the IT requirements were remote (stores, etc.), the cloud approach seemed to be prevalent. Larger companies and companies with high security requirements tended to lean more toward the on-premises or hybrid approach. Almost everyone agreed that moving an application (Software as a Service) or setting up a DR site in the cloud is a good way to gain exposure to cloud computing. This is nothing new and has been going on for some time.

Hybrid IT Option

Hybrid IT is an approach to enterprise computing in which an organization provides and manages some information technology (IT) resources in-house but uses cloud-based services for others. Many customers have applications that will not or should not move to the cloud. The easy examples are mainframe and high-end Unix systems, which are unlikely to move to the cloud, at least until the applications are replaced. Some of the attendees at this event were hesitant to move to the public cloud because of security and privacy concerns, while others had compliance regulations they must meet. These are valid concerns, and ones that hybrid IT can help address. While privacy and security should be of utmost concern, businesses still need to innovate, and the hybrid IT model can address both needs. Enterprises that deal with confidential data need the flexibility the public cloud provides. They have the ability to create a multi-tenant cloud within the hybrid model, which segregates applications and resources from each other and can be further isolated with VLANs and additional encryption methods. Many businesses have found success using hybrid IT models that allow them to keep full control over sensitive data, such as customer data or internal communications. They can keep that data stored on-premises and readily accessible, while relegating less-sensitive data and workloads to the cloud. An added benefit of maintaining a hybrid solution with an on-premises data center is disaster recovery and keeping private data out of the public pool. Hybrid IT is the ideal use of public and private resources to maximize cost savings and productivity and to minimize latency, privacy, and security concerns.


Infrastructure Hardening

Securing or Hardening

Securing, or hardening, aims to protect your IT infrastructure against cyberattacks by reducing the attack surface. The attack surface is all the different points where an attacker can attempt to gain access or damage the equipment. This blog is focused on securing servers and storage. The goal of server hardening is to remove all unnecessary components and access in order to maximize security. This is easiest when a server performs a single function. For example, a web server needs to be visible to the internet, whereas a database server needs to be more protected; it will often be visible only to the web servers or application servers and not directly connected to the internet. If a single server provides multiple functions, there may be a conflict of security requirements, so it is best practice not to mix application functions on the same server.

Implementing Hardening Policies

The information below provides a starting point for implementing hardening policies. Some of these items apply only to servers, but others apply to all devices on the network (servers, storage, networking).

All Devices:

- Change default credentials and remove (or disable) default accounts before connecting the device to the network. Disable guest accounts, setup accounts, and vendor accounts (vendor accounts can be enabled when necessary).
- Install security patches and firmware updates on a scheduled basis. My recommendation is to review device firmware, virtualization-layer software, and operating systems a minimum of every 6 months; if possible, review them every quarter.
- If possible, sign up for service update notifications from all vendors so you are notified of critical updates. Depending on the update, critical security updates may require immediate implementation.
- Develop a patch/firmware management process that includes what gets updated, when it gets updated, the outage window required, whether it can be automated, the process for patching or firmware upgrades, etc. Some devices may be updated quarterly, others monthly.
- Accurate timekeeping is essential for some security protocols to work effectively. Configure NTP servers to ensure all servers, storage, and network devices share the same timestamp. It is much harder to investigate security or operational issues if the logs on each device are not synchronized.
- Ensure all devices are located in a physically secured location with access restricted to approved staff only. Review and disable access for anyone who has left or changed roles.
- Review user- and administrator-level access to all devices. Ensure all default user IDs and passwords have been changed, and remove all users that are not on the approved list. If possible, use role-based access with Active Directory or the equivalent.
- For connections to all devices, use Secure Shell (SSH) when possible. This enables you to make a secure connection to your network services over an unsecured network. Avoid FTP, Telnet, and rsh commands; use a secured protocol.

Servers:

- Turn off services that are not required; this includes scripts, drivers, features, subsystems, file systems, and unnecessary web services. Remove all unnecessary software. On Windows systems, activate only the Roles and Features required for that host to function correctly. On Linux systems, remove packages that are not required and disable daemons that are not required.
- Remote Access (Windows RDP) is one of the most attacked subsystems on the internet; ideally, only make it available within a VPN and do not publish it directly to the internet. For Linux systems, remote access usually uses SSH. Configure SSH to whitelist permitted IP addresses that can connect and disable remote login for root (a minimal audit sketch appears at the end of this post).
- Configure operating system and application logging so that logs are captured and preserved. Consider a SIEM solution to centralize and manage the event logs from across your network.
- Review administrator access to host operating systems. Administrator accounts should only be used when required, by approved personnel.
- Set password settings to require strong and unique passwords. Force password changes periodically according to internal security practices (usually 30 to 90 days).
- Configure account lockout policies that lock out user accounts after repeated failed attempts. Consider using Multi-Factor Authentication (MFA) if feasible to improve the level of security.
- Review backup policies to ensure all servers are being backed up correctly according to company retention policies. Periodically test the backups to be sure recovery is possible.
- Review monitoring requirements and be aware of any activity on each system.
- Set up custom admin accounts. They can be Active Directory (AD) accounts or local accounts in the administrators group.
- Limit the security context of accounts used for running services. By default, these are the Network Service, Local System, or Local Service accounts. For sensitive application and user services, set up accounts for each service and limit privileges to the minimum required for each service. This limits the ability for privilege escalation and lateral movement.
- For Linux systems, use Secure Shell (SSH) when possible, as noted above; it enables a secure connection to your network services over an unsecured network.
- Enable UEFI Secure Boot to further ensure only trusted binaries are loaded during boot.
- If not in use, disable the IPv6 protocol to decrease the attack surface.
- Keeping partitions separated can help decrease the blast radius of any attack. Separating the boot partition from user data and application data will help protect your data.

Contact ProCern today if you would like more information on hardening your infrastructure.
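As referenced in the SSH item above, here is a minimal sketch of what an automated check of a few common sshd hardening settings might look like. PermitRootLogin and PasswordAuthentication are standard OpenSSH directives, but the expected values below are general guidance and an assumption for illustration, not a substitute for your own security policy; the sketch also ignores Match blocks and included files.

```python
from pathlib import Path

# Directive -> value we expect on a hardened host (general guidance only).
EXPECTED = {
    "permitrootlogin": "no",          # disable remote login for root
    "passwordauthentication": "no",   # prefer key-based authentication
}

def audit_sshd_config(path: str = "/etc/ssh/sshd_config") -> list:
    """Return findings for directives that differ from the expected values."""
    seen = {}
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            seen[parts[0].lower()] = parts[1].strip().lower()
    findings = []
    for directive, expected in EXPECTED.items():
        actual = seen.get(directive)
        if actual is None:
            findings.append(f"{directive}: not set explicitly (default applies)")
        elif actual != expected:
            findings.append(f"{directive}: set to '{actual}', expected '{expected}'")
    return findings

if __name__ == "__main__":
    # Reading sshd_config usually requires appropriate privileges.
    for finding in audit_sshd_config():
        print("WARNING:", finding)
```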