Admit it – you and I and almost everyone you know outsource some of our “SEO” content. This is the textual content that needs to be on pages to give context to a page otherwise filled with a product picture and a buy-now button. It is a rite of passage for search engine optimizers to spend hours writing what we consider to be filler copy that nonetheless makes a page include the words and phrases our audience might use to find us online, and it is a rite we quickly hand off to other providers.

But herein lies the problem – exactly how good are these writers? We know of a number of companies that provide these types of services – WriterAccess, TextBroker, and ContentRunner, to name a few. They make it easy to upload titles and instructions, and a writer from a large pool will deliver some SEO drivel that you can copy and paste onto your site. So how do you squeeze quality out of these services?

I wanted to give a real-world example of how one provider, ContentRunner, is using a powerful content relevancy measurement and instruction set to help guarantee the SEO quality of content. I set up a very simple experiment: I had three writers from different providers create content. I won’t tell you which provider delivered which article, except for the one from ContentRunner, so as not to impugn any of them. I chose the topic of in-memory databases because I had been experimenting with the real-time analytics database memSQL.

We used RelevancyRank (RR) to determine the SEO quality of each piece of content. RR is a measurement provided by the nTopic API that scores the contextual relevancy of a document to a particular keyword. RR is on a 0-10 scale, just like PageRank, and is logarithmic in nature. We know that content optimized with nTopic is statistically more likely to garner organic search traffic than content that is not, so we use it as a quality measurement.
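To make the idea concrete: nTopic’s actual scoring model is proprietary, and the code below is not its API. It is just a toy sketch of what “contextual relevancy to a keyword” can mean in practice – compare a document’s term profile against a reference profile for the keyword and map the similarity onto a logarithmic 0-10 scale. The reference term list and the scaling here are invented purely for illustration.

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Lowercase, tokenize, and count the terms in a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented reference profile: terms a relevancy model might associate
# with the keyword "in-memory database".
REFERENCE = term_vector(
    "in-memory database RAM main memory storage disk latency volatile "
    "snapshot checkpoint NVDIMM transaction scalability performance"
)

def toy_relevancy_rank(article_text):
    """Map similarity onto a 0-10 scale with a logarithmic curve."""
    sim = cosine_similarity(term_vector(article_text), REFERENCE)
    return round(10 * math.log1p(9 * sim) / math.log(10), 1)

print(toy_relevancy_rank(
    "An in-memory database stores data in RAM rather than on disk, "
    "trading volatility for much lower latency."
))
```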

So here are the examples, all of which are between 400 and 500 words, per the instructions given to the authors:

Article #1: Expensive

This article cost $33 to produce, using the highest-quality writer available on one of the open writing platforms. The article reads nicely; the author did some research on the subject and produced the content below. It scored a RelevancyRank of 6/10.

For many years, hard disk drives were the only practical storage option for large computer databases. This is beginning to change as memory prices fall and new operating systems introduce higher RAM limits. The in-memory database model provides a viable alternative that uses RAM chips instead of hard disks. Many data storage experts believe that more and more companies will adopt this technology as they recognize its advantages.

In-memory databases read, save and modify data in RAM. Users don’t have to wait for the computer to locate information on a hard drive. Memory works faster because it doesn’t have any mechanical parts. Furthermore, it allows for efficient data transfers that don’t make the processor work as hard. Different estimates indicate that IMDBs run eight to 100 times faster than disk-based systems. Data updates are particularly fast in RAM.

Another benefit of IMDBs is that they occupy less space. They minimize access times without the need to store redundant information in table indexes. Database software developers know that computers have less RAM than hard disk space, so they strive to use it efficiently. Smaller database files help compensate for the comparatively high cost of memory. They also expedite backups and reduce archive sizes.

This database model provides several other minor benefits. When server administrators populate databases with large quantities of data, they can speed up the process by using memory-based systems. These databases also offer excellent scalability. An IMDB can handle over one terabyte of data without slowing down, according to McObject.

Memory chips operate more reliably than hard disks. Components without moving parts simply don’t need as much maintenance, and they’re less vulnerable to physical shocks. Although RAM remains relatively expensive to replace, it is rapidly becoming more affordable. Memory prices drop about 25 percent each year, according to ITworld.

In-memory databases do have a few disadvantages. Memory loses data when electrical power is disconnected, so it’s important to take precautions. Computer operators must maintain battery backup systems and use non-volatile media to create backups. Another problem is that older operating systems cannot manage enough RAM to store huge databases. Some companies may need to upgrade their software before they can employ this database model.

Although they offer a variety of benefits, faster performance is the main advantage of using in-memory databases. This makes them desirable for large stock brokerages, instant communication systems, analytics applications and websites that supply real-time data. Businesses are likely to use in-memory databases for additional purposes as the cost of implementation falls and developers continue to enhance their IMDB software.
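(A quick aside for the technically curious: the speed claims in the article above are easy to sanity-check yourself. Here is a minimal sketch using Python’s built-in sqlite3 module, which supports a pure in-memory mode. The row count, schema, and file name are arbitrary choices for the demo, and the exact ratio you see will depend on your hardware.)

```python
import os
import sqlite3
import time

def time_inserts(conn, rows=2_000):
    """Insert rows one at a time, committing each, and return elapsed seconds."""
    conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    for i in range(rows):
        conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 64))
        conn.commit()  # per-row commit forces each write through to storage
    return time.perf_counter() - start

if os.path.exists("bench.db"):
    os.remove("bench.db")  # start each run with a fresh file

in_memory = sqlite3.connect(":memory:")  # the database lives entirely in RAM
on_disk = sqlite3.connect("bench.db")    # the database is backed by a disk file

print(f"RAM : {time_inserts(in_memory):.2f}s")
print(f"Disk: {time_inserts(on_disk):.2f}s")
```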

Article #2: Inexpensive

This article cost $9 to produce, using the second-highest-quality writer available on one of the open writing platforms. The article is sufficient, although it doesn’t flow quite as well as the first. However, it scored a RelevancyRank of 6/10 as well. There was no performance gain between the high-end and low-end provider. This is to be expected: unless the writer knows what words and phrases ought to be used to maximize content relevancy, they are unlikely to create fully relevant content.

An in-memory database is a database that uses the system’s main memory for data storage as opposed to disk-based storage. Typical disk-based storage devices include floppy disks, CDs or DVDs, and Blu-rays – generally speaking, any storage device where data is recorded to the surface of a moving disk. Storage devices such as memory cards and USB flash drives rely on flash memory to operate, and are also completely distinct from in-memory data storage.

Sometimes also referred to as an IMDB, a main memory database, or a memory-resident database, the in-memory database is most commonly seen in applications where a quick response time is critically important. Because the data is written into main system memory rather than onto a disk or stored elsewhere, the system is able to retrieve said data at a faster rate. In-memory databases are also optimized for efficiency, reducing memory consumption and CPU cycles and keeping less redundant data on devices than disk-based storage systems do.

Although other forms of storage are much cheaper, in-memory databases, once perfected, could be capable of retrieving data up to a thousand times faster. The current fastest database in the world is capable of simultaneously processing millions of transactions across one terabyte of data on a single server, benefiting from the fact that there are fewer steps involved in managing data. This is because the main database and the applications it powers share RAM, benefiting from an in-memory storage system. The interest in producing and developing in-memory databases is driven by the need for ever faster response times and the desire to lift the limitations of disk-based storage systems.

Copying the in-memory database to other data storage systems can help protect against crashes or main system failures, but most in-memory databases also offer features to help preserve data in case this kind of event does happen. One example of this is “transaction logging”, in which the system takes periodic “snapshots” of the in-memory database. These act as save points in the event of a system failure and may be referred to as “checkpoint images” instead of “snapshots”. Even if the system fails and needs to be restarted, it can rely on these “snapshots” or “checkpoints” to either roll back to the last completed transaction or move forward to complete a partially completed transaction cut off when the system went down.

However, with the introduction of modern NVDIMM technology an in-memory database can now run at full power and maintain data even in the event of an unexpected power failure or system crash.
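(One more aside: the “snapshot”/“checkpoint” recovery scheme the article above describes is simple to sketch. The code below is a toy illustration, not any particular product’s format – the file name and data layout are placeholders, and a real system would pair snapshots with a transaction log to replay changes made since the last checkpoint.)

```python
import json
import os

SNAPSHOT_FILE = "checkpoint.json"  # placeholder path for the checkpoint image

def take_snapshot(db):
    """Write a checkpoint image of the in-memory store to durable storage."""
    tmp = SNAPSHOT_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(db, f)
    os.replace(tmp, SNAPSHOT_FILE)  # atomic swap, so a crash mid-write is safe

def recover():
    """On restart, reload state from the last completed snapshot."""
    if os.path.exists(SNAPSHOT_FILE):
        with open(SNAPSHOT_FILE) as f:
            return json.load(f)
    return {}  # first run: start empty

db = recover()                      # roll forward from the last checkpoint
db["user:42"] = {"name": "Ada", "balance": 100}
take_snapshot(db)                   # changes made after this point would be
                                    # lost in a crash without a transaction log
```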

Article #3: ContentRunner and nTopic

This article cost $20, squarely in the middle. I could have bid less on the ContentRunner platform, but I wanted a middle-of-the-road result. ContentRunner, unlike the other providers, has integrated nTopic directly into its system for free, and you can request guaranteed minimum relevancy levels. Consequently, for $20, I received an article with a RelevancyRank of 8/10. The relevancy gain was dramatic – comparatively, it is the same difference one would expect between a PR6 site and a PR8 site. Content this relevant is rare, and it doesn’t have to be expensive if you are using computer-assisted content creation tools like nTopic.

An in-memory database, which is commonly referred to as a memory resident database, or a main memory database system, is a relatively basic database management system used for the storage of computer data. In-memory databases principally utilize main memory for their data storage.

There are many different benefits of an in-memory database, the first and most prominent being that the data is located and stored internally. A traditional database generally relies on a disk-based storage system, which brings an external element into play, such as a hard drive. While in-memory databases are not always as reliable as their traditional counterparts, they execute fewer CPU instructions than disk-optimized databases and feature simpler internal optimization algorithms. As such, the primary benefit associated with an in-memory database is its impressive response time. Data in an in-memory database is accessed notably more quickly than in disk-optimized databases, making in-memory the go-to choice for any application where response time is a key component. High-performance computing and big data applications – such as mobile advertising and telecommunications network equipment – often utilize the benefits of in-memory databases.

As technology continues to advance, the in-memory database improves. As mentioned above, in-memory databases are often eschewed in favor of disk-based storage systems because a traditional storage system’s reliability can more than offset an in-memory database’s quick access. However, that gap is rapidly disappearing. Hybrid databases have recently combined the best of both worlds, giving users the reliability of a traditional system mixed with the speed of an in-memory database. These hybrid systems are becoming more and more popular every day.

However, the in-memory database’s most impressive innovation is also one of its most appealing and beneficial features. In-memory databases can now take advantage of Non-Volatile Dual In-Line Memory Modules (NVDIMMs for short), which have taken their capabilities to the next level. NVDIMMs allow a database to retain stored data even in the event of a loss of power. Whether power is unexpectedly lost, a system crashes and shuts down, or a user simply pulls the plug, NVDIMMs allow in-memory databases to preserve their entire data store while otherwise running at full speed.

While the benefits of in-memory databases are not necessarily applicable to the average person, they can be wildly beneficial in the right context. In fact, in-memory databases can perform data management functions a full order of magnitude faster than their disk-based counterparts. And with innovation, invention, and creation constantly at play in the tech world, in-memory databases are only going to get better and continue to improve database management.

Takeaways

Hopefully we will soon see growth in nTopic usage for content creation. You can do this yourself, but that, in many ways, defeats the purpose of outsourced content, which is to lower the effort required of you. Or, of course, you can go to a site like ContentRunner, which has already built nTopic directly into its content writing process. Other providers are working on similar features, so hopefully we will soon see more and more focus on SEO content quality coming out of bulk content providers.