Friday, 27 February 2015

Data Mining Explained

Overview

Data mining is the process of extracting implicit and potentially useful information from data. It uses analytical and visualization techniques to explore and present that information in a format that is easily understandable by humans.

Data mining is widely used in a variety of profiling practices, such as fraud detection, marketing research, surveys and scientific discovery.

In this article I will briefly explain some of the fundamentals of data mining and its applications in the real world.

Herein I will not discuss related processes of any sort, including Data Extraction and Data Structuring.

The Effort

Data mining has found application in many fields, including financial services, health care and bioinformatics, business intelligence, and social network data research.

Businesses use it to understand consumer behavior, analyze clients' buying patterns and expand their marketing efforts. Banks and financial institutions use it to detect credit card fraud by recognizing the patterns involved in fraudulent transactions.

The Knack

There is definitely a knack to data mining, as there is with any other field of web research activity. That is why it is referred to as a craft rather than a science. A craft is the skilled practice of an occupation.

One point I would like to make here is that data mining solutions offer an analytical perspective on a company's performance based on historical data, but one also needs to consider unknown external events and deceitful activities. On the flip side, it is especially critical for regulatory bodies to forecast such activities in advance and take the necessary measures to prevent such events in the future.

In Closing

There are many important niches of Web Data Research that this article has not covered. But I hope it gives you a starting point from which to drill down further into this subject, if you want to do so!

Should you have any queries, please feel free to mail me. I would be pleased to answer each of your queries in detail.

Source: http://ezinearticles.com/?Data-Mining-Explained&id=4341782

Thursday, 26 February 2015

Know the Different Types of Mining Processes

Mining has become a controversial industry because of its "devastating" effect on the environment and the ecosystem. However, it has contributed so much to civilization that without it we would not be where we are today in many respects.

There are two basic methods of mining: surface mining and underground mining.

1. Surface Mining

This involves the mining of minerals located at or near the surface of the earth. It encompasses at least six processes:

• Strip Mining - this involves the stripping of the earth's surface by heavy machinery. This method is generally targeted at extracting coal or sedimentary rocks that lie near the earth's surface.

• Placer Mining - this involves the extraction of sediments in sand or gravel. It is a simple, old-fashioned way of mining. This method is generally applicable to gold and precious gems that are carried by the flow of water.

• Mountain Top Mining - this is a newer method that involves blasting off a mountain top to expose the coal deposits that lie underneath the crest.

• Hydraulic Mining - this is an obsolete method that involves jetting the side of a mountain or hill with high pressure water to expose gold and other precious metals.

• Dredging - it involves the removal of rocks, sand and silt underneath a body of water to expose the minerals.

• Open Pit - this is the most common mining method. It involves the removal of the top layers of soil in search of gold or buried treasure. The miner digs deeper and deeper until a large open pit is created.

2. Underground Mining

This is the process in which a tunnel is made into the earth to find the mineral ore. The mining operation is usually performed with the use of underground mining equipment. Underground mining is done through the following methods:

• Slope Mining - it involves the creation of slopes into the ground in order to reach the ore or mineral deposit. This process is generally applied in coal mining.

• Hard rock - this method uses dynamite or giant drills to create large, deep tunnels. The miners support the tunnels with pillars to prevent them from collapsing. This is a large-scale mining process and is usually applied in the extraction of large copper, tin, lead, gold or silver deposits.

• Drift mining - this method is applicable only when the target mineral is accessible from the side of a mountain. It involves the creation of a tunnel that's slightly lower than the target mineral. Gravity makes the deposits fall into the tunnel, where miners can collect them.

• Shaft method - this involves the creation of a vertical passageway that goes deep down underground where the deposit is located. Because of the depth, miners are brought in and out of the pit with elevators.

• Borehole method - this involves the use of a large drill and high pressure water to eject the target mineral.

These are the basic methods used in the extraction of common minerals. There are more complex systems, but still, they are based on these fundamental processes.

Source: http://ezinearticles.com/?Know-the-Different-Types-of-Mining-Processes&id=7932442

Sunday, 22 February 2015

Data Mining in the 21st Century: Business Intelligence Solutions Extract and Visualize

When you think of the term data mining, what comes to mind? If an image of a mine shaft and miners digging for diamonds or gold comes to mind, you're on the right track. Data mining involves digging for gems or nuggets of information buried deep within data. While the miners of yesteryear used manual labor, modern data miners use business intelligence solutions to extract and make sense of data.

As businesses have become more complex and more reliant on data, the sheer volume of data has exploded. The term "big data" is used to describe the massive amounts of data enterprises must dig through in order to find those golden nuggets. For example, imagine a large retailer with numerous sales promotions, inventory, point of sale systems, and a gift registry. Each of these systems contains useful data that could be mined to make smarter decisions. However, these systems may not be interlinked, making it more difficult to glean any meaningful insights.

Data warehousing tools extract information from various legacy systems, transform the data into a common format, and load it into a central data warehouse. This process is known as ETL (Extract, Transform, and Load). Once the information is standardized and merged, it becomes possible to work with that data.
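To make the ETL idea concrete, here is a minimal sketch in Python. The file name, column names, and the use of SQLite as a stand-in warehouse are all hypothetical; a real pipeline would read from the legacy systems' own exports and load into a proper warehouse.

    import csv
    import sqlite3

    # Extract: read rows from a legacy system's export (hypothetical file name).
    with open("legacy_sales_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: normalize the data into a common format.
    for row in rows:
        row["amount"] = float(row["amount"])         # unify numeric types
        row["store"] = row["store"].strip().upper()  # unify store naming

    # Load: write the cleaned rows into the warehouse table.
    conn = sqlite3.connect("warehouse.db")
    conn.execute("CREATE TABLE IF NOT EXISTS sales (store TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales (store, amount) VALUES (?, ?)",
        [(r["store"], r["amount"]) for r in rows],
    )
    conn.commit()
    conn.close()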

Originally, all of this behind-the-scenes consolidation took place at predetermined intervals such as once a day, once a week, or even once a month. Intervals were often needed because the databases had to be offline during these processes. A business running 24/7 simply couldn't afford the downtime required to keep the data warehouse stocked with the freshest data. Depending on how often this process took place, the data could be old and no longer relevant. While this may have been fine in the 1980s or 1990s, it's not sufficient in today's fast-paced, interconnected world.

Real-time ETL has since been developed, allowing for continuous, non-invasive data warehousing. While most business intelligence solutions today are capable of mining, extracting, transforming, and loading data continuously without service disruptions, that's not the end of the story. In fact, data mining is just the beginning.

After mining data, what are you going to do with it? You need some form of enterprise reporting in order to make sense of the massive amounts of data coming in. In the past, enterprise reporting required extensive expertise to set up and maintain. Users were typically given a selection of pre-designed reports detailing various data points or functions. While some reports may have had some customization built in, such as user-defined date ranges, customization was limited. If a user needed a special report, it required getting someone from the IT department skilled in reporting to create or modify a report based on the user's needs. This could take weeks - and it often never happened due to the hassles and politics involved.

Fortunately, modern business intelligence solutions have taken enterprise reporting down to the user level. Intuitive controls and dashboards make creating a custom report a simple matter of drag and drop while data visualization tools make the data easy to comprehend. Best of all, these tools can be used on demand, allowing for true, real-time ad hoc enterprise reporting.

Source: http://ezinearticles.com/?Data-Mining-in-the-21st-Century:-Business-Intelligence-Solutions-Extract-and-Visualize&id=7504537

Thursday, 19 February 2015

Internet Data Mining - How Does it Help Businesses?

The internet has become an indispensable medium for people to conduct many different types of business and transactions. This has given rise to the use of various internet data mining tools and strategies that help businesses better serve their purpose on the internet and increase their customer base manifold.

Internet data mining encompasses various processes for collecting and summarizing data from websites, webpage content or login procedures in order to identify patterns. With the help of internet data mining it becomes much easier to spot a potential competitor, improve the customer support service on a website and make it more customer oriented.

There are different types of internet data mining techniques, which include content, usage and structure mining. Content mining focuses on the subject matter that is present on a website, which includes the video, audio, images and text. Usage mining focuses on a process where the servers report the aspects accessed by users through the server access logs. This data helps in creating an effective and efficient website structure. Structure mining focuses on how websites are connected to one another. This is effective in finding out the similarities between various websites.

Also known as web data mining, these tools and techniques can be used to predict the potential growth of a specific product in a selected market. Data gathering has never been easier, and a variety of tools can be used to collect data in simpler ways. With the help of data mining tools, screen scraping, web harvesting and web crawling have become very easy, and the requisite data can be put readily into a usable style and format. Gathering data from anywhere on the web has become as simple as saying 1-2-3. Internet data mining tools are therefore effective predictors of the future trends that the business might take.

If you are interested in learning more about Web Data Mining and related topics, you are welcome to visit the Screen Scraping Technology site.

Source: http://ezinearticles.com/?Internet-Data-Mining---How-Does-it-Help-Businesses?&id=3860679

Wednesday, 18 February 2015

Revitalize and Refresh Your Home With a Dry Organic Deep Extraction Carpet Cleaning

While everyone is familiar with the old style of water-intensive, steam-based carpet cleaning methods, few are aware of the benefits of high-powered dry extraction carpet cleaning technology. With the environmental concerns of today, and water shortages throughout the country, dry extraction carpet cleaning is starting to gain popularity. This method employs vigorous agitation, deep-cleaning organic and biodegradable cleansing materials, and high-powered vacuum extraction to rejuvenate and cleanse deep into the carpet fibers.

The agitation system is composed of two counter-rotating nylon brushes that are safe for any synthetic or natural carpet fiber. A natural carrier material holds the cleaning agents and is spread in much the same way as carpet powder. High vacuum pressure utilizing HEPA filtration extracts deep-down dirt, grime and mold particles. Dry extraction carpet cleaning, while utilizing no water, will leave the carpet ready to walk on as soon as the cleaning is finished.

With old-style steam carpet cleaning it is often necessary to use several hundred gallons of water to achieve the same results. And while this type of carpet cleaning may seem less expensive, what many of these companies don't tell you is that the water they will be using will come from your own tap. Many of the cheapest steam cleaning companies will simply use steam cleaning machines that pump the used water back into your yard.

Dry extraction carpet cleaning requires no additional or hidden costs from the customer. The equipment is lightweight and easy to maneuver, allowing the technician to finish the job, usually in half the time of conventional steam methods. The completely biodegradable organic carrier agent is worked into the carpet to achieve the cleaning and then extracted with a high-powered vacuum. The twin-brush agitation method stretches and extends the carpet pile, leaving a texture similar to that of freshly laid carpet.

The scent is pleasant and not overwhelming, leaving the home smelling fresh. Dry extraction carpet cleaning has been around commercially for quite a few years, but only now is it starting to gain recognition as serious competition to other carpet cleaning services. When looking around for your next carpet cleaning service, consider a dry deep extraction system. With this method there are no harsh chemicals or cleaning agents that can harm your carpets, whether they are wool, shag, cut pile or premium import.

Try dry extraction cleaning the next time you want your carpet deep-down clean; you won't be disappointed.

For the absolute best home cleaning and maid service in Metro North Atlanta, MaidPro can get the job done. You dirty it, we can clean it - guaranteed! We only use safe, organic, hypoallergenic cleaning supplies and systems.

Source:http://ezinearticles.com/?Revitalize-and-Refresh-Your-Home-With-a-Dry-Organic-Deep-Extraction-Carpet-Cleaning&id=1608594

Sunday, 15 February 2015

Keene Labs: Create Beautiful Word Clouds With Scraped Data

Without innovation it’s impossible for consultancies to offer services that represent real added value. So we test innovative new technologies by pushing them through Keene Labs, our dedicated project area where we work with open source communities to develop new products that improve our services.

One of the tools we’ve been using to deliver client work and experiment with is import.io. Through the use of crawlers, the tool can turn any webpage into raw data – this in turn can be manipulated in a number of ways. The tool itself was built by a young tech startup based in London whom I had the pleasure of meeting at the end of last year.

In the screen capture video below I show you how you can use import.io to scrape data from a blog and then present this data as a beautiful word cloud using Tagul. It’s my first screen capture ever – so don’t pass too much negative judgment!

Source:http://www.keenecomms.com/2015/02/05/keene-labs-create-beautiful-word-clouds-with-scraped-data/

Monday, 9 February 2015

I Don’t Need No Stinking API: Web Scraping For Fun and Profit

If you’ve ever needed to pull data from a third party website, chances are you started by checking to see if they had an official API. But did you know that there’s a source of structured data that virtually every website on the internet supports automatically, by default?
That’s right, we’re talking about pulling our data straight out of HTML — otherwise known as web scraping. Here’s why web scraping is awesome:

Any content that can be viewed on a webpage can be scraped. Period.

If a website provides a way for a visitor’s browser to download content and render that content in a structured way, then almost by definition, that content can be accessed programmatically. In this article, I’ll show you how.

Over the past few years, I’ve scraped dozens of websites — from music blogs and fashion retailers to the USPTO and undocumented JSON endpoints I found by inspecting network traffic in my browser.

There are some tricks that site owners will use to thwart this type of access — which we’ll dive into later — but they almost all have simple work-arounds.

Why You Should Scrape

But first, let’s go over some great reasons why you should consider web scraping before you start looking for APIs or RSS feeds or other, more traditional forms of structured data.

Websites are More Important Than APIs

The biggest one is that site owners generally care way more about maintaining their public-facing visitor website than they do about their structured data feeds.

We’ve seen it very publicly with Twitter clamping down on their developer ecosystem, and I’ve seen it multiple times in my projects where APIs change or feeds move without warning.

Sometimes it’s deliberate, but most of the time these sorts of problems happen because no one at the organization really cares or maintains the structured data. If it goes offline or gets horribly mangled, no one really notices.

Whereas if the website goes down or is having issues, that’s more of an in-your-face, drop-everything-until-this-is-fixed kind of problem, and gets dealt with quickly.

No Rate-Limiting

Another thing to think about is that the concept of rate-limiting is virtually non-existent for public websites.

Aside from the occasional captchas on sign up pages, most businesses generally don’t build a lot of defenses against automated access. I’ve scraped a single site for over 4 hours at a time and not seen any issues.

Unless you’re making concurrent requests, you probably won’t be viewed as a DDOS attack, you’ll just show up as a super-avid visitor in the logs, in case anyone’s looking.

Anonymous Access

There are also fewer ways for the website’s administrators to track your behavior, which can be useful if you want to gather data more privately.

With APIs, you often have to register to get a key and then send along that key with every request. But with simple HTTP requests, you’re basically anonymous besides your IP address and cookies, which can be easily spoofed.

The Data’s Already in Your Face

Web scraping is also universally available, as I mentioned earlier. You don’t have to wait for a site to open up an API or even contact anyone at the organization. Just spend some time browsing the site until you find the data you need and figure out some basic access patterns — which we’ll talk about next.

Let’s Get to Scraping

So you’ve decided you want to dive in and start grabbing data like a true hacker. Awesome.

Just like reading API docs, it takes a bit of work up front to figure out how the data is structured and how you can access it. Unlike APIs however, there’s really no documentation so you have to be a little clever about it.

I’ll share some of the tips I’ve learned along the way.

Fetching the Data

So the first thing you’re going to need to do is fetch the data. You’ll need to start by finding your “endpoints” — the URL or URLs that return the data you need.

If you know you need your information organized in a certain way — or only need a specific subset of it — you can browse through the site using their navigation. Pay attention to the URLs and how they change as you click between sections and drill down into sub-sections.

The other option for getting started is to go straight to the site’s search functionality. Try typing in a few different terms and again, pay attention to the URL and how it changes depending on what you search for. You’ll probably see a GET parameter like q= that always changes based on your search term.

Try removing other unnecessary GET parameters from the URL, until you’re left with only the ones you need to load your data. Make sure that there’s always a beginning ? to start the query string and a & between each key/value pair.
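As a rough sketch of that trial-and-error process in Python (the site, endpoint, and parameter names below are made up), you can rebuild the trimmed URL with an HTTP library and confirm it still returns the data:

    import requests

    # Hypothetical search endpoint; only the parameters that actually affect
    # the results are kept after trimming the extras from the URL.
    params = {"q": "data mining", "sort": "newest"}
    response = requests.get("https://example.com/search", params=params)

    print(response.url)          # requests builds the ?q=...&sort=... query string
    print(response.status_code)  # should still be 200 with the trimmed URL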

Dealing with Pagination

At this point, you should be starting to see the data you want access to, but there’s usually some sort of pagination issue keeping you from seeing all of it at once. Most regular APIs do this as well, to keep single requests from slamming the database.

Usually, clicking to page 2 adds some sort of offset= parameter to the URL, which is usually either the page number or else the number of items displayed on the page. Try changing this to some really high number and see what response you get when you “fall off the end” of the data.

With this information, you can now iterate over every page of results, incrementing the offset parameter as necessary, until you hit that “end of data” condition.

The other thing you can try is changing the “Display X Per Page” setting that most pagination UIs now have. Again, look for a new GET parameter to be appended to the URL which indicates how many items are on the page.

Try setting this to some arbitrarily large number to see if the server will return all the information you need in a single request. Sometimes there’ll be some limits enforced server-side that you can’t get around by tampering with this, but it’s still worth a shot since it can cut down on the number of pages you must paginate through to get all the data you need.
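Here is one way that pagination loop might look in Python. The endpoint, the offset= and per_page= parameter names, and the assumption that the response comes back as JSON are all hypothetical; adjust them to whatever the site you’re scraping actually uses.

    import requests

    url = "https://example.com/search"   # hypothetical endpoint
    offset, page_size, results = 0, 100, []

    while True:
        resp = requests.get(url, params={"q": "data mining",
                                         "offset": offset,
                                         "per_page": page_size})
        items = resp.json().get("items", [])
        if not items:            # fell off the end of the data
            break
        results.extend(items)
        offset += page_size      # advance to the next page

    print(len(results), "items collected")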

AJAX Isn’t That Bad!

Sometimes people see web pages with URL fragments # and AJAX content loading and think a site can’t be scraped. On the contrary! If a site is using AJAX to load the data, that probably makes it even easier to pull the information you need.

The AJAX response is probably coming back in some nicely-structured way (probably JSON!) in order to be rendered on the page with JavaScript.

All you have to do is pull up the network tab in Web Inspector or Firebug and look through the XHR requests for the ones that seem to be pulling in your data.

Once you find it, you can leave the crufty HTML behind and focus instead on this endpoint, which is essentially an undocumented API.
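A sketch of hitting such an endpoint directly; the URL, parameters, and response shape here are invented for illustration:

    import requests

    # Hypothetical XHR endpoint spotted in the browser's network tab.
    resp = requests.get(
        "https://example.com/api/products",
        params={"category": "shoes", "page": 1},
        headers={"X-Requested-With": "XMLHttpRequest"},  # some sites check for this
    )

    for item in resp.json()["results"]:   # assumes the items live under "results"
        print(item["name"], item["price"])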

(Un)structured Data?

Now that you’ve figured out how to get the data you need from the server, the somewhat tricky part is getting the data you need out of the page’s markup.

Use CSS Hooks

In my experience, this is usually straightforward since most web designers litter the markup with tons of classes and ids to provide hooks for their CSS.

You can piggyback on these to jump to the parts of the markup that contain the data you need.

Just right click on a section of information you need and pull up the Web Inspector or Firebug to look at it. Zoom up and down through the DOM tree until you find the outermost <div> around the item you want.

This <div> should be the outer wrapper around a single item you want access to. It probably has some class attribute which you can use to easily pull out all of the other wrapper elements on the page. You can then iterate over these just as you would iterate over the items returned by an API response.
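In Python with BeautifulSoup (more on parsing libraries below), that iteration looks roughly like this; the URL, tags, and class names are placeholders for whatever hooks the site’s designers actually left in the markup:

    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com/products").text
    soup = BeautifulSoup(html, "html.parser")

    # "product-item" is a made-up class name; use the wrapper class you found
    # in the inspector around each item you care about.
    for wrapper in soup.find_all("div", class_="product-item"):
        name = wrapper.find("h2").get_text(strip=True)
        price = wrapper.find("span", class_="price").get_text(strip=True)
        print(name, price)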

A note here though: the DOM tree that is presented by the inspector isn’t always the same as the DOM tree represented by the HTML sent back by the website. It’s possible that the DOM you see in the inspector has been modified by JavaScript — or sometimes even by the browser, if it’s in quirks mode.

Once you find the right node in the DOM tree, you should always view the source of the page (“right click” > “View Source”) to make sure the elements you need are actually showing up in the raw HTML.

This issue has caused me a number of head-scratchers.

Get a Good HTML Parsing Library

It is probably a horrible idea to try parsing the HTML of the page as a long string (although there are times I’ve needed to fall back on that). Spend some time doing research for a good HTML parsing library in your language of choice.

Most of the code I write is in Python, and I love BeautifulSoup for its error handling and super-simple API. I also love its motto:

    You didn’t write that awful page. You’re just trying to get some data out of it. Beautiful Soup is here to help. :)

You’re going to have a bad time if you try to use an XML parser since most websites out there don’t actually validate as properly formed XML (sorry XHTML!) and will give you a ton of errors.

A good library will read in the HTML that you pull in using some HTTP library (hat tip to the Requests library if you’re writing Python) and turn it into an object that you can traverse and iterate over to your heart’s content, similar to a JSON object.
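For example, with BeautifulSoup the parsed document behaves like a nested object you can walk and index into. The toy HTML below just stands in for whatever Requests pulled down:

    from bs4 import BeautifulSoup

    # Toy document standing in for the HTML fetched with Requests.
    html = ("<html><body><ul id='posts'>"
            "<li><a href='/a'>First post</a></li>"
            "<li><a href='/b'>Second post</a></li>"
            "</ul></body></html>")
    soup = BeautifulSoup(html, "html.parser")

    posts = soup.find("ul", id="posts")
    for li in posts.find_all("li"):            # iterate like a list
        link = li.a
        print(link["href"], link.get_text())   # attribute access like a dict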

Some Traps To Know About

I should mention that some websites explicitly prohibit the use of automated scraping, so it’s a good idea to read your target site’s Terms of Use to see if you’re going to make anyone upset by scraping.

For two-thirds of the websites I’ve scraped, the above steps are all you need. Just fire off a request to your “endpoint” and parse the returned data.

But sometimes, you’ll find that the response you get when scraping isn’t what you saw when you visited the site yourself.

When In Doubt, Spoof Headers

Some websites require that your User Agent string is set to something they allow, or you need to set certain cookies or other headers in order to get a proper response.

Depending on the HTTP library you’re using to make requests, this is usually pretty straightforward. I just browse the site in my web browser and then grab all of the headers that my browser is automatically sending. Then I put those in a dictionary and send them along with my request.
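A minimal sketch of that with the Requests library; the header values here are placeholders for whatever your own browser actually sent:

    import requests

    # Headers copied out of a real browser session (values shortened here).
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.9",
        "Referer": "https://example.com/",
    }

    resp = requests.get("https://example.com/some-page", headers=headers)
    print(resp.status_code)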

Note that this might mean grabbing some login or other session cookie, which might identify you and make your scraping less anonymous. It’s up to you how serious of a risk that is.

Content Behind A Login

Sometimes you might need to create an account and log in to access the information you need. If you have a good HTTP library that handles logins and automatically sends session cookies (did I mention how awesome Requests is?), then you just need to have your scraper log in before it gets to work.
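With Requests that usually means a Session object; the login URL and form field names below are hypothetical and will differ from site to site:

    import requests

    session = requests.Session()

    # Hypothetical login form; check the real form's action URL and field names.
    session.post("https://example.com/login",
                 data={"username": "me@example.com", "password": "secret"})

    # The Session keeps the login cookies and sends them with every later request.
    resp = session.get("https://example.com/members/reports")
    print(resp.status_code)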

Note that this obviously makes you totally non-anonymous to the third party website so all of your scraping behavior is probably pretty easy to trace back to you if anyone on their side cared to look.

Rate Limiting

I’ve never actually run into this issue myself, although I did have to plan for it one time. I was using a web service that had a strict rate limit that I knew I’d exceed fairly quickly.

Since the third party service conducted rate-limiting based on IP address (as stated in their docs), my solution was to put the code that hit their service into some client-side JavaScript, and then send the results back to my server from each of the clients.

This way, the requests would appear to come from thousands of different places, since each client would presumably have their own unique IP address, and none of them would individually be going over the rate limit.

Depending on your application, this could work for you.

Poorly Formed Markup

Sadly, this is the one condition that there really is no cure for. If the markup doesn’t come close to validating, then the site is not only keeping you out, but also serving a degraded browsing experience to all of their visitors.

It’s worth digging into your HTML parsing library to see if there’s any setting for error tolerance. Sometimes this can help.

If not, you can always try falling back on treating the entire HTML document as a long string and do all of your parsing as string splitting or — God forbid — a giant regex.
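If you do end up there, it looks something like this; the page and the price-grabbing pattern are toy examples of last-resort string parsing:

    import re
    import requests

    html = requests.get("https://example.com/broken-page").text

    # Last resort: treat the document as one long string and pull out whatever
    # sits inside hypothetical <span class="price"> tags, however mangled the
    # surrounding markup is.
    prices = re.findall(r'<span class="price">\s*([^<]+?)\s*</span>', html)
    print(prices)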



Well there’s 2000 words to get you started on web scraping. Hopefully I’ve convinced you that it’s actually a legitimate way of collecting data.

It’s a real hacker challenge to read through some HTML soup and look for patterns and structure in the markup in order to pull out the data you need. It usually doesn’t take much longer than reading some API docs and getting up to speed with a client. Plus it’s way more fun!

Source: https://blog.hartleybrody.com/web-scraping/