Filter: Data Center Technology

A Guide to Web Hosting: Top 5 Questions Answered

A professional website is a necessity for business today. Most people prefer using the Internet to carry out business and also to determine the legitimacy of a company. All websites need a home. A web host is that home, and as a service provider it is crucial in creating and building your online presence, keeping your site accessible to visitors worldwide and ensuring it is secure from cyber-attacks. There are many web hosting vendors, but to differentiate between their services, businesses need to understand the basic aspects of web hosting. Here we answer the top 5 frequently asked questions on website hosting as a guide for beginners.

1. What is Website Hosting?

Simply put, when you purchase web hosting, you are buying storage space for your website online. Web pages contain text, images, audio, video, PDF files and other content that must be stored on secure web servers owned and managed by the web hosting company so that it is available to the public worldwide. To have a website hosted and visible to anyone with an internet connection, the site owner registers a domain name and leases a block of space on a web server where the site's web pages are uploaded and stored.

A domain name is the address that points to the hosting account on the rented server where the data for the website is stored. For example, www.Lunarpages.com is our domain name. The server delivers files to the web browser making the request. Not all servers are the same, and the server features offered will vary depending on the web host provider.
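To make this concrete, here is a minimal Python sketch of that request cycle: the domain name is resolved to the hosting server's IP address, and the page is then requested from that server. It uses only the standard library, the domain is simply the example above, and the output will vary with your network.

```python
# Minimal sketch: resolve a domain name to its hosting server, then request a page.
# Uses only the Python standard library; output depends on your network and the host.
import socket
import urllib.request

domain = "www.lunarpages.com"                 # the example domain from above

ip_address = socket.gethostbyname(domain)     # DNS: domain name -> server address
print(f"{domain} resolves to {ip_address}")

# The browser then asks that server for a page; the web server returns the stored files.
with urllib.request.urlopen(f"http://{domain}/") as response:
    print("The server answered with HTTP status", response.getcode())
```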

2. What Services are provided by Web Hosting Companies?

Web hosting providers don’t just sell space on their servers. They also provide other essential services for building and hosting a website, such as domain name registration, email hosting, content management systems, SSL certificates, website builders with free templates, setups for forums, guestbooks, social networking integration and ecommerce features, and a choice of operating system, including Windows, Linux and Java hosting. Web server hosting can come with different infrastructures, such as dedicated and even cloud servers, for clients to choose from according to their specific needs. A good web hosting service will have advanced service packages available with 24/7 technical support, database storage, email services and scripting support.

3. What Features Should a Basic Web Hosting Plan Provide?

Look for these fundamental features in website hosting plans:

Disk Space
Disk space is the storage space provided by your web hosting provider for your web files, including text, images, animations, video and audio. Few websites actually need more than 100-500 megabytes, but good web host providers offer packages with varying amounts of space to satisfy sites of all sizes, from smaller personal websites to larger company websites. Many hosts now advertise 1 GB to 2 GB or more, though you may rarely need it. If your web host provides error and access logs, these too will be stored in your web space. As your site grows and the number of visitors increases, you can upgrade to more storage space as needed.

Bandwidth
Bandwidth refers to the total amount of data a website can transfer to its visitors over a given period of time. It determines how much traffic your site can handle and affects the speed of your website: the less bandwidth your site has, the slower pages load once that allowance is strained. Some web hosts provide unlimited bandwidth, whereas others charge different prices based on the amount you use per month. A high-traffic website with heavier pages containing many images, videos and other media will need more storage and greater bandwidth. Generally, 25-50 GB per month is sufficient for an average new website.
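If you want a rough sense of your own needs, the arithmetic is simple: multiply average page weight by pages per visit and visits per month. The short Python sketch below does exactly that; all of the numbers are illustrative assumptions, not recommendations.

```python
# Rough, illustrative estimate of monthly bandwidth needs.
# The figures below are assumptions for the sake of the example.
avg_page_size_mb = 2.0        # average page weight including images and scripts
pages_per_visit = 5           # pages a typical visitor views
visits_per_month = 20_000     # expected monthly visits

monthly_transfer_gb = avg_page_size_mb * pages_per_visit * visits_per_month / 1024
print(f"Estimated monthly transfer: {monthly_transfer_gb:.1f} GB")
# ~195 GB for these numbers; a smaller site with 2,000 visits would need roughly
# 20 GB, in line with the 25-50 GB figure quoted above for a new site.
```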

Uptime
Uptime refers to the percentage of time that a hosting server stays up and running. It is absolutely essential for businesses, especially those operating an e-commerce site or transacting through their site, to have a 24×7 web host running on a powerful server with robust network connections, in order to avoid losses from downtime. A 99.9% uptime guarantee still allows roughly 8-9 hours of downtime per year; 99.99% uptime brings that down to under an hour, and anything less than 99.9% is generally unacceptable.
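Uptime percentages translate directly into hours of downtime per year, as the short calculation below shows.

```python
# Converting an uptime guarantee into yearly downtime (simple arithmetic sketch).
HOURS_PER_YEAR = 365 * 24   # 8,760

for uptime_pct in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (100 - uptime_pct) / 100
    print(f"{uptime_pct}% uptime -> about {downtime_hours:.1f} hours of downtime per year")

# 99.0%  -> ~87.6 hours
# 99.9%  -> ~8.8 hours
# 99.99% -> ~0.9 hours (about 53 minutes)
```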

Site Backup
There may be times when a site crashes, whether because of hacker activity or a severe hard disk failure. Your web host should have facilities for regular site backups so that your site can be restored easily and quickly. Make sure you set up automated backups when you set up your hosting plan to avoid any costly losses.

Programming Services
You should be able to create web pages with languages and technologies such as HTML and ASP, and connect them to databases. The best web hosts also provide PHP support and MySQL databases.

Customer Service
All businesses looking for web hosting services should pay special attention to the customer support options a provider offers, because support can be your server’s lifeline. We recommend that you look for 24/7 technical support. Customer support through live chat, phone and email proves extremely useful in the event of urgent technical or other website-related problems. Your web hosting company should also have comprehensive documentation for solving problems yourself, if necessary. Reaching your web host should be easy, with quick resolution for any type of problem you may have.

4. What are the Types of Web Server Hosting?

A good web hosting company offers different types of web hosting options catering to different business needs. As your company and site grow, you can migrate to a different hosting plan.

Shared Server Hosting
For many smaller businesses a reliable web presence is essential, but they often do not have the budget for expensive server systems and infrastructure, so it works better to share resources. Shared hosting is the cheapest option and simply means you share a single web server with several other webmasters. In shared hosting services, an individual server caters to multiple hosting accounts, each with an allocated amount of storage and bandwidth. This type of plan may be suitable for plain static websites with limited interactive features, small ecommerce sites or a personal website, all of which need cost-effective hosting. Because resources are shared, misuse by individual users can affect everybody else on the same server. There is less risk to your hosting account if the pool of resources is larger and there are more redundancies in place to mitigate that risk.

Dedicated Server Hosting
Dedicated hosting allows individuals and businesses to lease an entire server and its connectivity from a web host, instead of sharing a server with other customers. Unlike shared server hosting, you remain unaffected by other websites. You can select your operating system, customize software and personalize settings for all your multimedia and e-commerce requirements. With dedicated server hosting, you get advanced server control without needing to make huge upfront investments in purchasing equipment and space to house your own server. Dedicated servers are most popular with high-traffic websites, e-commerce sites and sites streaming video content.

Cloud Server Hosting
With advances in cloud computing technologies, cloud hosting has emerged as a cost-effective web hosting solution. Unlike shared or dedicated server hosting, cloud hosting runs across multiple servers that interconnect to act as a single system. Cloud server hosting is more stable because of load balancing, the security features of multiple servers, the absence of a single server as a point of failure, and the ability to increase or decrease server resources according to your needs. Cloud hosting services are charged on the basis of usage, and it is easy to scale your resources up or down according to your traffic needs.

5. What is the Difference between Linux and Windows Web Hosting?

With every web hosting plan, you have to choose the operating system on which the server will run. End users should be aware of the features of both Windows and Linux systems for web hosting in order to pick the one best suited for their site. Windows and Linux both allow FTP access to your files, but only Linux hosting typically permits telnet or SSH access, which lets you open a shell session on the web server to work with your files directly. A dynamic website will also need a database; Access and MySQL are generally the preferred choices. Linux servers generally offer MySQL, although MySQL can run on both Linux and Windows servers. Access, though, is only available on Windows. Security is a key issue for any website, and the general perception among webmasters is that Linux is more secure than a Windows web server. Good web hosts will ensure your server remains secure whatever the OS may be.

You can select Linux or Windows web hosting according to your specific needs. Linux web hosting offers a wide range of features for web design and development; it is the optimal platform for PHP, Perl or Python scripting and for applications like WordPress, Drupal, Joomla and other popular CMSs, and it suits more interactive sites with inquiry forms, online purchasing and e-commerce features. Windows hosting is a must if you want to develop your site using Microsoft’s ASP or ASP.NET, or if you want to implement an MSSQL database. Interestingly, Microsoft has just announced that users will get the choice of running its Windows-based SQL Server on a server running Linux. If you would rather keep your options open across Linux and Windows hosting, you can opt for Java hosting, where the same underlying code works on both Windows and Linux servers. There are no portability issues, and it can be used by anyone of any skill level, from beginners to technical experts.

Web Hosting Made Easy
This covers just a few of the most important basic questions about web hosting; many more remain. Our team at Lunarpages would be happy to help you with any questions you may have via Live Chat. Discover the different Web Hosting Plans we have to offer.

What Is an SSL Certificate?

Many of you may have noticed that some website URLs begin with an https:// prefix, display a padlock to the left of the URL, carry a trust seal and sometimes show a green address bar. These are all visual signs of a trusted SSL certificate. SSL is short for Secure Sockets Layer, a global standard security technology for encrypting communications between a website and the visitor’s browser. A digital certificate, also called an SSL certificate, is installed on the web server to create a secure connection, which ensures that all data passed between the two parties remains private and secure. In essence, SSL encryption serves a dual purpose:

  1. It authenticates the identity of the website as trustworthy, which assures visitors that they’re on a legitimate site that has not been tampered with by hackers or identity thieves.
  2. It encrypts the data that is being transmitted, to prevent hackers from stealing or tampering with private information such as credit card numbers, passwords, names, addresses and emails.
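For the technically curious, the check a browser performs can be reproduced with Python's standard ssl module: connect to a site over TLS, verify the certificate against the system's trusted CAs and inspect who issued it. This is only a sketch, and the hostname below is an illustrative placeholder.

```python
# Minimal sketch of the browser side of the handshake: connect over TLS/SSL and
# inspect the certificate the server presents. The hostname is a placeholder.
import socket
import ssl

hostname = "www.example.com"
context = ssl.create_default_context()   # uses the system's trusted CA list

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Issued by:", dict(item[0] for item in cert["issuer"]))
        print("Expires  :", cert["notAfter"])

# If the certificate is not signed by a trusted CA, or the name does not match,
# wrap_socket raises ssl.SSLCertVerificationError -- the same kind of check that
# makes a browser warn visitors about an untrusted site.
```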

Do I Need an SSL Certificate?
An individual or business will need an SSL certificate if they use a website to:

  • Sell products or services,
  • Accept credit card payments online through a merchant account, or
  • Collect confidential information such as logins and passwords, personal data (e.g., name, address, social security number, birth dates), medical records & proprietary information, etc.

Credit card companies such as Visa, MasterCard, Discover Network, American Express and Diners Club International have made it mandatory for websites to be Payment Card Industry (PCI) compliant in order to accept credit card payments. It is therefore essential to have an SSL certificate if you receive, transmit or process credit card information.

Customers are becoming much savvier and prefer not to conduct business online or give away their credit card information unless they are assured of the legitimacy of a business.  Visitors look for the reliable symbol of an SSL Certificate to see if it is safe to complete secure transactions on the website.

Types of SSL Certificates

There are different levels of SSL certificates, which can be issued based on:

  1. The number of domain names or subdomains owned, including:
    • Single – The certificate enables SSL encryption on one fully-qualified domain name or subdomain name
    • Wildcard – A Wildcard SSL certificate secures one domain name and unlimited subdomains (subdomains should have the exact same second level domain name) using a single certificate.
    • Multi-Domain – One multi-domain certificate can secure up to 210 domains, with a mix of different second-level domains like domain.com, www.domain.com, domain.net, etc.
  2. The level of validation needed:
    • Domain Validation – provides private SSL certificates with basic encryption and verification of the ownership of the domain name registration. A domain validation certificate ties in with your unique domain name and it helps customers feel comfortable transacting with you online.
    • Organization Validation – certifies your company’s identity in addition to basic encryption and verification of ownership of the domain name registration. This requires documentation and details of the owner (e.g., name and address) for authentication. This SSL validation certificate displays your site in a browser differently to show your business legitimacy.
    • Extended Validation (EV) – EV certificates provide the highest degree of secure connections because the certificate is issued after checks and verification of ownership of the domain name registration, business authentication, and also the physical and operational existence of the business. This type of certificate is the highest standard of assurance for visitors to prevent fraud and the green HTTPS address bar is exclusive to EV SSL certificates.

SSL Certificate Providers

The most essential aspect of an SSL certificate is where it comes from: specifically, the Certificate Authority (CA) that issues it. CAs are organizations that verify and authenticate the identity and legitimacy of the company requesting a certificate. The CA authenticates the applicant’s credentials using the WHOIS database, Dun & Bradstreet data, photo IDs issued by government organizations or other credible sources before issuing certificates, and it retains status information on them. Choosing the right SSL provider is of utmost importance because web browsers store a cached list of trusted CAs; the browser generally warns the user that a website may not be trustworthy when its digital certificate is signed by somebody not on that approved list. When evaluating SSL certificate providers, you also need to consider the data encryption level, web browser compatibility and price.
You can purchase digital certificates from a domain name registrar or website hosting provider. If your site is hosted on a VPS or dedicated server, it will require a dedicated IP address for each private SSL certificate.

Getting Started

The Shared Certificate is included with all Basic and Business plans. Shared SSL works only with HTML and CGI/Perl-based documents, scripts and carts because of security restrictions on the servers. Lunarpages does not offer a Shared SSL Certificate with the Windows plans.

The Shared Certificate is included with all Basic and Business plans. Shared SSL works only with html, and cgi/perl based documents/scripts/carts because of security restrictions on the servers. Lunarpages do not offer a Shared SSL Certificate with the Windows plans.

If you require SSL for PHP, ASP or JSP pages, you will need to purchase a personal certificate and a dedicated IP. With a personal certificate, your link would appear as https://yourdomain.com.
Alternatively, you can purchase your certificate from another Certificate Authority and have Lunarpages install it.

Choose your SSL Certificate according to your website’s security needs and the volume of online transactions it handles. Finally, make sure your SSL Certificate is compatible with virtually all browsers worldwide.

Bursting Some Popular Cloud Myths

The word “Cloud” still causes a lot of confusion, and many people are left wondering what it actually is. When opting for cloud hosting, businesses rent virtual server space rather than renting or purchasing physical servers. Virtual server space is often paid for by the hour, depending on the capacity required at any particular time. These virtualized cloud servers have gained popularity globally because of their enormous shared computing power. Even core products from Microsoft and Adobe, such as Office 365 and Creative Cloud, use data stored on remote servers. There are, however, many myths about cloud hosting that worry customers considering a cloud-hosting provider. Let’s burst some myths to get to the truth about cloud server hosting.

Myths and Truths About Cloud Server Hosting

Myth #1: Cloud Hosting is Not Secure
Fact: Cloud hosting providers are continuously improving their best practices and compliance levels for securing critical data and applications. Nonetheless, it comes down to choosing a leading cloud hosting company with good credentials and service level agreements. The company you choose should also offer the highest levels of security with fully managed firewall protection. A well-run cloud hosting environment delivers very high uptime with an SOC 2/SSAE 16 data center, high-availability architecture with multiple servers, 256-bit encryption, automatic offsite backups, firewalls, routers, uninterruptible power supplies, load balancers, switches, mirrored disks, RAID implementation and 24/7 onsite monitoring. Additionally, software updates, including security patches, are applied to all customers simultaneously in a multitenant system. Most hosts treat cloud security very seriously and implement the latest technology and resources to protect the cloud environment, because if the cloud were proven unsafe, cloud companies would lose millions in sales. Security in the cloud, even in large cloud environments, has so far been stellar, and there have been very few security breaches in the public cloud compared with on-premises data center environments.

Myth #2: Cloud Services Are Complicated
Fact: Cloud hosting may seem confusing with its many variations of public cloud, private cloud, hybrid cloud and even community cloud, but cloud servers are no more complex than dedicated servers or VPS. Cloud hosting actually simplifies the job of an IT manager or CTO because of its easy setup, instant provisioning through an online control panel, on-demand utilization and customization. The online control panel handles all the tough work, making cloud storage as easy as dragging a file to an icon.

Myth #3:  Cloud Hosting Is Expensive
Fact: Cloud hosting helps businesses save considerable financial resources and offers flexibility and adaptability for both the short and long term. It can be a much cheaper alternative to shared or dedicated servers, though cost comparison can be tricky. With cloud hosting you only pay for the resources you use, so it often works out cheaper than other hosting services. What you pay for on the cloud depends on a few factors, including the number of users, data size, customized backups, the applications used and exchange services. Cloud computing replaces the need to install local servers, network equipment, power conditioning, antivirus software, backup solutions and dedicated server rooms, while reducing the cost of IT staff, user support and maintenance.

Myth #4: Cloud Performance Is Not Reliable
Fact: In the early days of cloud computing, there may have been some performance issues. However, these problems have been addressed by the leading cloud service providers, which now offer work-specific solutions for high-powered, high-speed storage with guaranteed IOPS, along with other improvements. Cloud providers have made their systems resilient to avoid outages. No system is perfect and the cloud can fail too, but those failures are few and far between compared with the alternatives. The cloud environment can be engineered to adapt to strenuous workloads and high-availability requirements, avoiding performance and failure issues.

Myth #5: There Is Only One Cloud
Fact: There are hosting providers offering cloud services from the small-business to the enterprise level, and there is actually more than one type of cloud: a Public Cloud, a Private Cloud and a Hybrid Cloud. A Public Cloud shares network infrastructure that is accessible from an off-site Internet source. While it is easier to share files on a Public Cloud, a Private Cloud offers advanced security features and guaranteed high-quality maintenance of software and infrastructure. The third type, a Hybrid Cloud, combines aspects of a Private and a Public Cloud. For example, businesses can keep their data and applications for QuickBooks or other financial software on a Private Cloud, while less sensitive documents are stored on a Public Cloud.

The Bottom Line
When considering cloud hosting, it all comes down to finding a hosting provider with a proven track record. Try looking up comparison charts to find hosts with the most resources, an appropriate array of hosting products and excellent customer support. Cloud services have moved from being an afterthought to being top of mind for businesses of all sizes. Amazon and Salesforce are just two companies that are shining examples of the utility of SaaS platforms in the cloud revolution. But cloud computing is not just for large enterprises; it offers greater IT efficiency and capabilities for small and medium-sized businesses as well. Smart businesses should be ready to move to the cloud to leverage its value and benefits, or risk being left behind by competitors who are already doing so.

MySQL vs. NoSQL Databases: To Relate or Not to Relate?

When it comes to databases, you really only have two choices: relational databases or nonrelational databases.

For years, relational databases, which use SQL, have ruled the data airwaves, but nonrelational NoSQL databases have recently started to gain popularity.

These two types of databases are very distinct. Choosing the right one for your data-driven projects or applications requires careful consideration of resources and business objectives.

Relational databases use something called SQL (Structured Query Language) to extract value and results from the stored data. The data is contained within tables, and each table contains rows of data that fit a predefined format. Some of the most popular SQL databases include Oracle, Microsoft SQL Server, Postgres and MySQL.

Recently, nonrelational databases, which don’t use the query language or the structure found in SQL databases, have emerged and use the appropriate moniker of NoSQL (i.e., “not SQL”). These databases don’t use tables, and they have a much looser structure than traditional SQL databases. Popular NoSQL databases include MongoDB, CouchDB, BigTable, RavenDB and Cassandra.

The challenge most business IT leaders face is deciding which database type best suits their organizations or applications.

Why MySQL Might Make Sense

MySQL was introduced in 1995 as an open-source relational database. It continues to be one of the most popular database choices and is used by numerous companies and web applications, including Zappos, MTV Networks and Facebook.

MySQL is extremely good for structured data. Each table has a primary key, which allows for relationships to be made between tables. Using SQL, the database’s query language, data can easily be searched, added and deleted with a variety of well-documented commands. MySQL is good for fast inserts of data and complex joins between different data sets and can return search results in a structured manner.
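As a quick illustration of those ideas (tables with a predefined format, primary keys and joins), here is a small Python sketch. It assumes a local MySQL server and the mysql-connector-python driver, and the table and column names are invented for the example.

```python
# Illustrative sketch of the relational model: fixed-format tables, primary keys,
# and a join between related rows. Assumes a local MySQL server is reachable.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="demo",
                               password="demo", database="shop")
cur = conn.cursor()

# Each table has a predefined format and a primary key.
cur.execute("""CREATE TABLE IF NOT EXISTS customers (
                 id INT AUTO_INCREMENT PRIMARY KEY,
                 name VARCHAR(100) NOT NULL)""")
cur.execute("""CREATE TABLE IF NOT EXISTS orders (
                 id INT AUTO_INCREMENT PRIMARY KEY,
                 customer_id INT NOT NULL,
                 total DECIMAL(10,2),
                 FOREIGN KEY (customer_id) REFERENCES customers(id))""")

# Primary keys let rows in different tables be related to each other.
cur.execute("INSERT INTO customers (name) VALUES (%s)", ("Ada",))
cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)",
            (cur.lastrowid, 42.50))
conn.commit()

# A join returns the related data in a structured manner.
cur.execute("""SELECT c.name, o.total
               FROM customers c JOIN orders o ON o.customer_id = c.id""")
print(cur.fetchall())
conn.close()
```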

MySQL is best used for heavy-duty transactional applications because it is quite stable and maintains strong data integrity. SQL databases follow the computer-science ACID model: atomicity, consistency, isolation and durability.

The biggest advantage of MySQL is its robust support community. This is similar to WordPress, which has gained dominance in large part because it, too, is open-sourced and backed by a large support community.

Why NoSQL Might Make Sense

NoSQL is relatively new to the database market. It is designed for scalability and performance. There is no set size for the data that NoSQL databases contain, unlike MySQL, which has predefined field sizes. Given that it is flatter in structure, NoSQL is also much more elastic and can quickly grow in size.

Instead of the traditional joins used by relational databases, NoSQL databases typically follow a key-value structure, and queries look up linked data by key. NoSQL databases are nonrelational and often document-oriented; others use simple key-value pairs or wide-column stores (versus the table-based structure of SQL databases).
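For contrast with the relational sketch earlier, here is a short example of the document model using the pymongo driver. It assumes a local MongoDB instance, and the collection and field names are illustrative only.

```python
# Small sketch of the document model: schemaless, self-contained documents that
# are queried by key-value criteria rather than joined. Assumes a local MongoDB.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# No predefined table format: related data is embedded in a single document,
# and one document may carry fields that others do not have.
db.customers.insert_one({
    "name": "Ada",
    "orders": [{"total": 42.50, "items": ["ssd", "ram"]}],
    "loyalty_tier": "gold"          # a field other documents need not include
})

# Queries match documents by key-value criteria instead of SQL joins.
doc = db.customers.find_one({"name": "Ada"})
print(doc["orders"][0]["total"])
```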

NoSQL proponents say the databases are easier to set up because they don’t require time-intensive and detailed data models. Also, NoSQL has become the database of choice for Big Data implementations. When used with MapReduce, NoSQL becomes more powerful as well as cost effective, because large data sets can be processed and analyzed across clusters of computers or nodes.

However, because NoSQL is new, it doesn’t have the community support that MySQL does. Similarly, there aren’t as many reporting tools available, which can add additional costs to a NoSQL implementation, since organizations will have to purchase reporting solutions separately. There’s also a resource problem due to a scarcity of database administrators who are familiar with NoSQL.

The Question of Scalability

One of the biggest differences between MySQL and NoSQL is how data scales in each environment. Scalability is important because data sets tend to grow tremendously over time, and more companies are capturing different types of data and even pulling in legacy data stores. As processing requirements grow, you need infrastructure that can handle increasing volumes of data and heavier processing demands.

With MySQL databases, scalability is vertical. That means if you want to give your MySQL database more oomph, you need to give it more power in the form of more RAM or CPUs. This can be costly, and there is a limit to the amount of RAM and CPUs you can add.

Conversely, with NoSQL databases, you scale horizontally. Instead of building more powerful boxes, you simply add more servers to the data cluster. With Big Data and MapReduce, building up the number of nodes adds to the processing power. One of the key advantages of a NoSQL environment is that servers can be added using less expensive hardware.

So Which Database Should You Choose?

If you work with lots of structured data and need to have ample support for your data implementations, then MySQL is the obvious choice. You will find a vast community available to help you, as well as a plethora of tools at your disposal.

If, however, you have largely unstructured data, which tends to grow exponentially, and most of your data transactions are mainly retrieval or “append” operations, you may want to consider a NoSQL approach. NoSQL is good for write-once (static) and read-many (transaction) data implementations.

Before you make any decision, be sure you talk with your application developers so you fully understand whether you need structured or unstructured data sets. Research the cost implications, and hire a consultant, partner or database analyst who understands your requirements. Do not simply make a blind decision on the type of database to use, because converting from one type to another can be a gargantuan task.

[image: Anatoliy Babiy/iStock/ThinkStockPhotos]

Off to the Races: Big Data and the Need for Speed

Time is money, especially if you’re a Lucas Oil Off Road racer: First place could mean $2,100. But standing in the winner’s circle doesn’t come easy—it takes guts and grit, backed by sophisticated timing systems able to sort out even the closest race. When one hundredth of a second means the difference between victory and defeat, mistakes can’t happen; timing data must be both precise and delivered on demand.

Enterprises share a similar concern when it comes to big data. Accurate and timely analysis gives companies an edge over their competition, helping them understand when it’s time to stomp on the gas and when they need to take it slow. Bottom line? You can’t afford to lose this race.

Race Day Analytics

It’s easy to write off data analytics tools as hype; the costs of cloud computing, as-a-service deployments and bring-your-own-device (BYOD) adoption make it tempting to avoid a clear-cut data strategy. Many industries—notably healthcare—actively resist the pull of big data.

The problem? Using legacy systems is like timing a Formula 1 race with a stopwatch and human sight — there’s a better way, and it’s paying dividends. Inis Motorsport, for example, supplies the live timing technology used in all Lucas Oil Off Road races and is able to push results in real time to PCs, mobile devices and Web browsers.

The system uses a series of transponders that monitor each car as it crosses the finish line, providing lap, best and gap time data instantly. For enterprises? Sophisticated data analytics tools now on the market can churn through vast amounts of structured and unstructured information, uncovering patterns and trends as they go. And according to a recent Forbes article, cutting-edge solutions work in real time.

V for Victory?

Beyond intelligent systems, companies need skill. Data to mine isn’t enough on its own—businesses need to use the right data sets, ask the right questions and be prepared to act on results immediately. But the move away from gut feelings and corporate experience to hard data can be daunting. As described by IBM, however, it’s possible to evaluate the strength of your data using what Big Blue calls the “4Vs”: Volume, velocity, variety and veracity.

Sound complicated? Consider the example of a race track with hundreds of cars zooming past the finish line. This is the enterprise server; the cars are bits of collected data. To offer value, there must be a certain volume of cars on the track: One or two don’t make a race. It must also be possible to analyze data with a certain velocity—as each car crosses the line, times must be posted instantly.

In addition, data needs veracity, which refers to the logical consistency of both data sets and results. Sets must exhibit at least some commonality; if the race is made up of two funny cars, three monster trucks and a vintage sedan, the results won’t be usable. Finally, results reporting must be reliable: if widely different lap times are reported for similar vehicles, something isn’t right.

The Final Data Lap

Variety is where many enterprises get bogged down. Instead of a closed track, the flow of big data is like continually adding new cars to the course until the ground is fairly littered with wreckage.

Consumer data, internal data, Web analytics data — all are part of a larger whole, but together they can make the total amount of information available to a company seem impossibly huge. Your best bet? Start small. Find a web host that supports popular analysis tools. When it’s time to go bigger, consider a SaaS deployment with a narrow focus, followed by a custom-built or in-house solution.

Don’t get left behind; real-time analytics can help make sure you never miss the checkered flag.

Meet the Industrial Internet Consortium

Five major corporations — AT&T, Cisco, GE, IBM and Intel — have banded together to form a not-for-profit technology ecosystem known as the Industrial Internet Consortium (IIC).

According to a recent Cisco press release, the IIC is an “open membership group focused on breaking down the barriers of technology silos to support better access to big data with improved integration of the physical and digital worlds.”

In other words, this joint effort hopes to pave the way for better connections between virtual resources and physical devices. But what does that really mean for enterprise IT and the Internet at large?

The Foundation of the Internet of Everything

You can call it Machine-to-Machine technology, the Internet of Things (IoT) or the Industrial Internet — they amount to the same thing: physical devices networked through embedded technology that both detects internal states and interacts with the external environment. Chris Neiger of The Motley Fool simplifies the process: “Think of the Internet of Things as a way for everyday objects to talk to each other, and to talk to you.”

For consumers, this could take the form of an Internet-enabled coffeepot or toaster oven. But how can enterprise benefit?

A recent Forbes article took a look at the Rail Splitter Wind Farm, which contains 67 IoT-enabled wind turbines. Covered in tiny sensors, these turbines relay myriad data points to a cloud-based network every second, allowing engineers to make subtle speed or pitch adjustments for maximum efficiency. New technologies allow the turbines to “speak” to one another — if a turbine’s anemometer (used to measure wind speed) fails, it can communicate with nearby turbines to make up the sudden gap in knowledge.

Getting Organized Around the Internet of Things

According to Cisco Canada CTO Jim Seifert, the Internet of Things will drive $14.4 trillion worth of economic activity over the next 10 years. But it’s not all smooth sailing. Guido Jouret, vice president and general manager of Cisco’s Internet of Things business group, says “ninety-nine percent of everything is still unconnected.”

There’s also the issue of Big Data: Every physical device generates massive amounts of data, which must be collected and verified and then properly interpreted.

The goal of the IIC, therefore, is to organize and standardize the way companies collect and share IoT data. As noted by the Wall Street Journal, it’s telling that the IIC included the word “industrial” in its name, since this indicates a focus on markets such as manufacturing, oil and gas exploration, healthcare and transportation. Why? Because these areas often have hardware and software products that work well together but don’t play nicely with products from other companies.

A recent Silicon Angle article offers a real-world take on this problem, arguing that if the IIC had started its work five years ago and developed a set of internationally recognized standards, the missing Malaysia Airlines Flight 370 could have been easily located.

Contributing to the Standardization Efforts

Mike Troiano, vice president of advanced mobility solutions for AT&T, says the IIC builds on his company’s vision of “enabling people to operate anything remotely, anytime and virtually anywhere.” But don’t expect this kind of revolution overnight, since the consortium wants to standardize everything from Internet protocols to data storage to power level metrics. Membership is also open to any company with an interest in IoT, meaning standards will ultimately be reflective of broad industry trends but will take time to hammer out.

In the meantime, it’s possible for enterprises to benefit from the Internet of Everything. Intel advises companies to identify the top business problem they want to solve and then determine what kinds of connections provide the best results. In many cases, the addition of remote data-collection tools can provide a significant boost to real-time and predictive analytics, along with providing room for future system scalability.

The IIC is worth watching because it aims to provide a framework for industrial IoT applications along with open discussion. If successful, the joint effort should produce a set of unified, transparent standards within the next few years.

[image: slavemotion/iStock/ThinkStockPhotos]

Why Dedicated Hosting Is Still Essential to the Enterprise

Is dedicated hosting at the end of its life cycle? With public clouds on the rise and “as a service” versions of everything from storage to networking to disaster recovery now available, it’s tempting for companies to phase out dedicated servers in favor of cloud-centric alternatives.

But according to a Microsoft study, dedicated servers account for 48 percent of hosted infrastructure spending and will continue to top 40 percent over the next two years; in other words, dedicated hosting is still essential to the enterprise. Here’s why.

Are Dedicated Servers Heading Toward a Dead End?

The argument for cloud over dedicated services typically centers on the concepts of flexibility and scalability. A recent Tech Radar piece makes this argument: Since dedicated servers can’t scale on the fly, and data loads can’t be moved from server to server without significant downtime, cloud options may be the better choice for enterprise.

What’s more, reliability is often improved because, in the event of a power outage or a disaster, company data can be automatically migrated to a new server. Cost also makes its way into the dedicated-versus-cloud discussion: Because cloud resources spin up on demand, enterprises only pay for what they actually use.

Big companies like Microsoft are willing to take a chance on the cloud; Data Center Knowledge reports that the Redmond giant’s Azure cloud forms the infrastructure of Titanfall, the new, massively popular Xbox One and PC-exclusive video game from Electronic Arts. So what’s not to like about the cloud?

Whose Data Is It?

What’s the fundamental difference between dedicated hosting and the cloud? In the public cloud, sharing is a prerequisite — to lower the cost of compute resources, providers rely on large servers and shared tenancy. Dedicated options, meanwhile, give companies free run of an entire server, meaning the actions of other tenants won’t affect bandwidth or availability.

It’s also worth noting that despite increased uptime guarantees, cloud providers periodically experience outages. As a recent CIO Insight article notes, enterprises relying on services from Google, Microsoft and Amazon have suffered through downtime, and in some cases lost data. And as discussed by Gigaom, moving to the cloud isn’t always cheaper. Using average costs for a server with 30 gigabytes (GB) of RAM and approximately 300 GB of storage, author David Mytton found that moving to the cloud cost 250 to 500 percent more than using a dedicated hosting provider.

Security and transparency are also good reasons to go dedicated. Using a cloud server means relying on the security offered by your provider, while dedicated hosts let you choose whatever security and access controls best suit your needs.

Transparency, meanwhile, is especially critical during an outage. Cloud providers are typically unwilling to specify the exact cause of downtime or the steps taken to fix the issue, so enterprises are flying blind in the event of an outage. With a dedicated server, internal IT can go hands-on and prevent issues from reoccurring.

The Best of Both Cloud Worlds

It’s safe to say, then, that dedicated hosting isn’t dead in the enterprise space, but it’s also worth considering potential evolutions of this idea. One option is a local private cloud, which combines the scalability of cloud resources with the single tenancy of dedicated hosting.

A March 27 IT Web Business article notes that private cloud deployments are predicted to increase through 2014 as companies look for ways to balance compute power with local control. Colocation hosting is another option — here, enterprises supply their own server for use in a provider’s data center. All server maintenance, security and access are handled by local IT, and providers take care of power, network infrastructure and support.

Dedicated hosting still has a place in the enterprise IT landscape, from “traditional” deployments to options like colocation and private clouds. The trend to public alternatives continues — as augmentation, not replacement — for the dedicated enterprise server.

[image: welcomia/iStock/ThinkStockPhotos ]

IBM’s New Power Servers Fuel Impressive Data Center Optimization

In 2008, IBM combined its “System i” and “System p” server architectures to form the Power Systems product line, which runs the IBM i operating system. Processors in this new line initially used Power6-based chips but were upgraded to the Power7 architecture in 2010, thanks to a contract with the Defense Advanced Research Projects Agency.

Now, IBM is on the cusp of its next iteration, Power8, and has also made its existing Power technology available to outside manufacturers. As a result, the first-ever third-party Power server is poised to hit the market. Is this a game changer for enterprise IT?

Taking a Peek Under the Hood

According to a March 24 article from CIO, tech manufacturer Servergy will be the first to use Power processors in its new server, the Cleantech CTS-1000. This blade server is the size of a legal pad and weighs in at just over four kilograms (nine pounds). Servergy also has plans to join IBM’s OpenPower Consortium, which focuses on developing both hardware and software to support Power systems. Other members of the consortium include Samsung Electronics, graphics chipmaker Nvidia and search giant Google.

As for the CTS-1000, Servergy hasn’t released price details or an exact configuration, but the company says they’re targeting cloud and Big Data workloads with this release. The server comes with a 1.5 gigahertz (GHz), eight-core Power processor, which has led to speculation that Power8 is under the hood. But the server also uses PCI Express 2.0 rather than 3.0 ports, which are the new standard for Power8. Bottom line? Servergy says the Cleantech only uses around 100 watts of power, even running under full load, and is “16 times or more the I/O and compute density over traditional server technology.”

Wait for Power8?

There’s no official release date for Power8, but sites like IT Jungle predict a rollout at the end of April or the beginning of May. Intel’s Xeon E5 and E7 x86 processors are already dominant in this space and have recently undergone a “refresh,” so it makes sense for IBM to launch Power8 as soon as possible; combined with third-party Power servers, this should help boost its share of the market beyond internal IBM and Unix-based hardware.

There are also rumors that Big Blue will focus on companies looking to “scale out,” or add more servers, rather than on companies hoping to “scale up” by investing in more processors or RAM. Again, this is an Intel — and to some extent an ARM — market, but if Servergy’s focus is any indication, IBM plans to target enterprises looking to make best use of the cloud, Big Data and open-source technologies.

IBM is also working with Nvidia to move beyond the PCI Express interface by using a new interconnect called NVLink. This technology allows data transfer between Nvidia GPUs and IBM Power-based processors to be five times faster, effectively allowing graphics cards to make full use of CPU memory. Data flow is currently restricted to 16 gigabytes (GB) per second; when NVLink debuts in 2016, it will support transfer rates of up to 80 GB per second, addressing the “even greater bottleneck between the GPU and IBM Power CPUs, which have more bandwidth than x86 CPUs.”

Power Up and Go Colo

One possible market niche for IBM’s Power Systems and Power8 architecture is colocation hosting. For many enterprises, the cost of building and maintaining an entire data center is prohibitive, but the thought of moving critical data to provider-managed servers is daunting.

Colocation — in which a managed service provider offers the facilities, power and support necessary to host company data — is an attractive solution because companies supply whatever hardware they prefer. With its low weight and high power, the Cleantech CTS-1000 seems like an ideal colocation candidate, especially if it runs Power8.

There’s a new server in town, powered by IBM. Cloud-facing enterprise is Big Blue’s target market, and Power architecture has real potential, especially when it comes to colocation.

[image: Wavebreak Media/ThinkStockPhotos]

Health IT Has 99 Data Problems, and Storage Management Is One of Them

Data is quickly becoming one of the most important assets — and concerns — of IT organizations. But it’s not just the data itself that’s essential; it’s also about how data is collected, stored and backed up.

A recent report sponsored by Iron Mountain, a storage and information management company, presented the results of a survey of health information technology (HIT) professionals about their data concerns and top priorities for the coming years.

The white paper, 2014 HIMSS Analytics Report – The Perfect Storm: Navigating the Health IT Archiving and Data Management Challenge, represents a survey conducted by HIMSS Analytics, a company that aims “to provide the highest quality data and analytical expertise to support improved decision-making for healthcare providers, healthcare IT companies and consulting firms.”

One hundred and fifty IT executives from randomly selected U.S. hospitals were surveyed. The respondents had to play a role in at least one of the following areas: disaster planning, purchasing of data systems or responsibility for data management.

The IT executives represented three key market segments: under 150 beds, 150–500 beds and 500-plus beds. CIOs made up 59 percent of the respondents, and IT directors accounted for 37 percent. The remaining classifications were VP of technology and CTO.

The study covers a lot of ground, and one thing it touches on is IT leaders’ fears about managing the explosion of data in modern health IT.

“It is our opinion that managing the exponential proliferation of data (e.g. storage, data back‐ups and archiving) is the next ‘monster’ hiding underneath the IT leader’s bed,” the report states.

Sifting Through the Latest Health IT Findings

Before getting into the nitty-gritty of the survey’s findings, the report points out some immediate data concerns. Because of increased healthcare regulations and requirements, diagnostic codes will quadruple. This directly translates into an increase in the data storage of any healthcare organization. Similarly, data capture and sharing requirements are putting increased pressure on healthcare organizations to develop systems to better handle collected and archived data in general.

Migrating from paper-based environments to digital collection and storage puts much more emphasis and dependency on how, where and for what purpose data is collected.

Below is my high-level summary of the report. I encourage you to review it yourself for the complete analysis.

Data Storage and Access Needs Depend on Hospital Size

Respondents estimated the percentage of data accessed at specific time points across three data types (clinical, operational and laboratory). Interestingly, most data was accessed primarily within its first six months, with access declining over three years. This was true for all three data-type categories.

A majority of the data accessed was considered “active,” meaning it was stored onsite for immediate access. Clinical data tended to be the most “active” data type that was accessed, and an average of 43 terabytes was stored onsite to accomplish this. Larger hospitals (500-plus beds) stored large amounts (200 terabytes). Operational data represented about 17 terabytes (65 terabytes for large hospitals), and laboratory data was approximately 11 terabytes (28 terabytes for large hospitals).

In the larger hospitals, imaging technology was the primary driver of the large storage allotments; in smaller hospitals it was new EMR (electronic medical record) requirements.

Other key survey highlights:

  • 46 percent of respondents said that data storage represented more than six percent of their IT budget.
  • 67 percent of small or midsized hospitals use storage area networks (SANs); 100 percent of large hospitals use SANs.
  • 24 percent of respondents use cloud computing storage; 62 percent use tape and disc storage.
  • 48 percent of respondents replicate their data center and applications.
  • 33 percent of respondents had the capability to share their stored data.

Data Backup and Archiving Remains a Challenge

Critical to any data-heavy business is the ability to regularly and completely back up various data stores. This is not something specific to the healthcare industry alone. In fact, having a backup routine and procedure should be a requirement of any organization.

Of the health IT leaders surveyed, 42 percent said they use a variety of backup approaches that include backing up all onsite data within their facility, replicating to an offsite data center or replicating data onsite.

The healthcare industry has robust data archival requirements. Again, this can apply to other types of organizations — financial, for example — that handle large amounts of data. More than half of the respondents said that they did have an archival strategy; 46 percent said they did not.

Often, it is difficult to understand what, exactly, is deemed as “archival data,” but some of this is outlined by regulatory compliance. Most health IT leaders (83 percent) said they have an archival strategy in place because of compliance requirements.

While healthcare is heavily regulated, the data shows that compliance alone isn’t forcing archival and backup implementation and adoption.

Data Recovery and Business Continuity Needs to Be Tested

If your customers’ or patients’ data is critical, disaster recovery and business-continuity plans should be viewed as mission critical, too. Lost data can be gone forever, and when it comes to patient data, the patient might not be so understanding about the loss of potentially lifesaving information from his or her record.

Of the health IT professionals surveyed, 69 percent said they have disaster recovery solutions in place, and 66 percent have a business-continuity strategy lined up. Here are some additional highlights:

  • 82 percent comply with HIPAA guidelines.
  • 46 percent test their strategy annually.
  • 18 percent test semiannually.
  • 19 percent don’t test regularly.
  • 21 percent have experienced a disaster recovery or data loss event.

The results of this healthcare survey highlight the importance of data, and the fact that there are a variety of strategies across the industry. While larger healthcare organizations tend to have more robust strategies, the amount of associated data makes those strategies more complex. Healthcare is heavily regulated, which means that these organizations are frequently forced down particular IT paths.

IT professionals should carefully review what is mission critical: Is it the infrastructure running the organization, or is it the underlying collected data? Companies need to pay close attention to what data is being collected, how it is being stored and archived and how it can be used in the future to drive business success.

[image: Darrin Klimek/Digital Vision/ThinkStockPhotos]

5 Reasons Why Windows Server 2012 Is Worth the Upgrade

In October 2012, Microsoft debuted its new operating system, Windows 8, to mixed reviews. Technology expert Jakob Nielsen described it as a combination of two UI styles: one for use with desktops and one for mobile devices. According to Nielsen, “On a regular PC, Windows 8 is Mr. Hyde: a monster that terrorises poor office workers”; on a tablet the OS is “akin to Dr. Jekyll: a tortured soul hoping for redemption.” Technology reviewers agreed, with many IT professionals taking a pass on Windows 8.

Companies were understandably concerned about any product running on Windows 8 architecture, which included Microsoft’s server OS upgrade, Server 2012. But don’t throw the server OS baby out with the desktop OS bathwater; the replacement for Windows Server 2008 not only gets the job done but also does it better than expected. For data center pros, an upgrade to Windows Server 2012 may actually be worth the cost.

Here are five reasons why:

1. Internet Information Services 8 (IIS 8)

With Server 2012 came a revamped version of IIS, which significantly improved the information-management capabilities of Microsoft’s offering. In an article for The Register, technology consultant Trevor Pott described IIS 8 as putting an end to “more than a decade’s worth of ‘you use Windows as your Web server’ jokes.”

Why? Because it supports script precompilation, granular process throttling, centralized certificate management and SNI. In addition, Microsoft managed to incorporate a streamlined FTP server, something technology professionals have wanted for years.

2. PowerShell 3.0

Sure, PowerShell 3.0 doesn’t exactly change the cmdlet landscape, but it’s implemented so well in Server 2012 that it’s almost worth the cost of admission alone. PowerShell 3.0 not only lets system admins handle every aspect of the server OS but also offers control over SQL, Exchange and Lync-based companion servers. The downside? The documentation can be spotty, even now. But once you find or create the right scripts, Server 2012 is managed with ease.

3. Optional GUI

Users love great-looking graphical user interfaces (GUIs), but this has brought trouble for Microsoft in the past — think Windows 8 and its “Live Tiles” — and for server management, a GUI can be more of a distraction than a benefit. Apparently, someone in Redmond got the message.

When you first install Server 2012, you get two options: core or full. Core is recommended, and it comes with an optional GUI. Install the GUI role on top of the core deployment and you’re ready to go, no reinstallation necessary. Use the graphical interface for ease of server configuration and, once you’re ready for deployment, simply remove the GUI role. This reduces server load and helps to limit attack surface.

4. Simplified Versions

Of course, upgrading to a new server OS always comes with the question of licensing. Server 2012 strips down the available choices and offers just two: Standard and Datacenter. Midsize companies with only a few server instances are well served by Standard, which provides full functionality and two virtual instances. Larger companies should opt for Datacenter, which provides unlimited virtual instances — this is especially helpful if you plan to run live migration.

5. Server 2012 R2

In October 2013, Microsoft released Windows Server 2012 R2, and it, too, boasts some notable upgrades. First is the storage quality-of-service feature for virtual hard disks (VHDs), which lets you set minimum and maximum I/O loads for each VHD. The result? Predictable throughput across hard disks. R2 also includes a Hyper-V upgrade that lets you export virtual machines (or checkpoints) on the fly, which means you don’t need to power down virtual machines and waste precious time.

The Windows 8 architecture had its share of detractors, but Server 2012 (and R2) more than makes up for these shortcomings and effectively levels the server OS playing field.

[image: Tomasz Wyszołmirski/iStock/ThinkStockPhotos]

Tech’s Golden Jobs: Businesses Are in Search of Great IT Talent

There’s a gold rush going on, but this time it isn’t gold people are after — it’s silicon; and it isn’t in California (well, actually, a lot of it is) — it’s in the tech market. There is a shortage of qualified and available information technology job candidates. And with the tech sector booming, this shortage is getting larger.

With buzzwords like cloud computing, Big Data and software-defined networking flying around like leaves on a windy day, it’s no surprise that many tech job descriptions are laden with these same words.

A simple search for these buzzwords on LinkedIn will overwhelm you with options. The jobs — very specific, targeted jobs — are there, but the qualified candidates are not.

The Most Wanted Jobs in IT

According to U.S. News & World Report, many of the top “technology” jobs are focused on the computer, the server and even the data center. Here are the top 11:

  1. Software Developer
  2. Computer Systems Analyst ***
  3. Web Developer
  4. Information Security Analyst ***
  5. Database Administrator ***
  6. Civil Engineer
  7. Mechanical Engineer
  8. IT Manager ***
  9. Computer Programmer
  10. Computer Systems Administrator ***
  11. Computer Support Specialist ***

The asterisks are my own; they indicate the core IT jobs that relate to server and infrastructure management and maintenance. Many are quite specific; others can encompass a wide variety of responsibilities within the IT job category.

Although there are plenty of IT jobs available, you will tend to find them clustered within various tech communities around the United States. CIO.com ranks the top 10 states for IT jobs:

  1. California ($109,000)
  2. Texas ($77,900 Dallas-Ft. Worth/Arlington; $77,200 Round Rock)
  3. Florida ($70,930)
  4. Illinois ($79,800)
  5. New York ($80,700)
  6. Virginia ($82,100)
  7. Georgia ($76,060)
  8. New Jersey ($79,700)
  9. Pennsylvania ($75,660)
  10. North Carolina ($76,100)

The dollar amounts listed above appear in the CIO article and are the typical median incomes for tech jobs in these regions.

If you are in search of an IT job, it’s important to consider that jobs in other states may also come with a higher cost of living. Sure, you will make more money in these areas, but you will also most likely spend more just making ends meet.

Mining for IT Talent

Things get a bit more interesting with the injection of the data center into the mix. Cloud computing has done wonders for the IT sector, particularly as a means for companies and enterprises to save money on IT costs. Through virtualization and the use of the cloud, many businesses are choosing to forgo building and maintaining expensive data centers. But that presents an interesting conundrum.

The cloud builds cost and process efficiencies by off-loading data center capabilities, but that means the jobs attached to data center maintenance and buildouts are also reduced. Suddenly, there is less need for system administrators or IT professionals who have the skills to stand up physical server environments. Infrastructure deployments are now done with the click of a mouse or an API call and can be completely automated in some cases.

On the flip side, cloud, infrastructure and data center providers are hiring. They are desperately seeking qualified people to help architect the systems that power their services. Similarly, PC support jobs are being affected by things like Desktop as a Service, which allows for PC environments to be virtualized and hosted in sort of a traditional thin-client scenario.

If you suddenly find yourself at a company that is outsourcing all of its IT to the cloud, ZDNet’s Jason Perlow has one recommendation: “Get thee some new skills. Quickly.” What are these skills, exactly? That really depends on what you’ve previously specialized in. Below are some recommendations:

  • Look at your traditional training and see how it maps to some of the newer job descriptions.
  • Take some online or offline courses to brush up on new skill sets.
  • Review some of the jobs that interest you. Are there certifications you can acquire that might give you an edge? For example, according to a Tom’s IT Pro article, the top 2014 networking certifications are:
    • MCSE – Microsoft Certified Solutions Expert
    • CCNP – Cisco Certified Network Professional
    • RHCE – Red Hat Certified Engineer
    • CompTIA Network+
    • CWNA – Certified Wireless Network Administrator
    • CCIE – Cisco Certified Internetwork Expert
  • Talk to people who have your dream job. Engage with them on social media or elsewhere, and ask them what critical skills you need in order to do the job.
  • Look at some traditional IT positions, and see what may be a more updated or modernized position (e.g., SysAdmins might want to evolve their skill sets to reflect more of a DevOps role).
  • Read and research, and ask the experts lots of questions.

Technology is evolving at breakneck speed. Companies are struggling to keep up and fill positions with competent staff. There is a lot of competition for skilled candidates.

If you are considering a job change and you work in the IT sector, now is the time to dig in and start brushing up on your skills and marketing yourself. This Silicon Gold Rush won’t last forever, but it isn’t showing signs of slowing down (yet).

[image: AKodisinghe/iStock/ThinkStockPhotos ]