Many big data projects run into the 80/20 rule: 80% of resources are spent getting data into analytic tools and only 20% on analyzing the data. syslog-ng can deliver data from a wide variety of sources to Hadoop, Elasticsearch, MongoDB, and Kafka, as well as many others.
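As a sketch of what such a pipeline looks like, the fragment below collects syslog messages over the network and forwards them to a Kafka topic using syslog-ng's kafka() destination. The broker address and topic name are placeholders, not values from this document:

```
@version: 4.2

# Collect syslog messages arriving over plain TCP.
source s_net {
  network(transport("tcp") port(514));
};

# Forward each message to a Kafka topic.
# "broker1:9092" and "syslog" are placeholder values.
destination d_kafka {
  kafka(
    bootstrap-servers("broker1:9092")
    topic("syslog")
  );
};

log { source(s_net); destination(d_kafka); };
```

Swapping the destination for elasticsearch-http() or mongodb() follows the same pattern, which is what lets one collection layer feed several analytic back ends.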
Wide variety of data
Delivering data in disparate formats from systems, applications, and devices often requires multiple tools and custom integration work.
Massive data volumes
Big data is, well, big. Many data sources can overwhelm data collection tools.
Difficult to access data
Most big data systems capture data from complex, distributed systems, often from multiple remote sites with a variety of connectivity and latency issues.
Data loss
Insights based on incomplete data are often wrong. In large environments, it is easy to lose data during collection and ingestion.
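One common safeguard against losing messages when a remote destination is slow or unreachable is syslog-ng's disk-buffer() option, which queues messages on disk until delivery succeeds. A minimal sketch, with a placeholder hostname and buffer size:

```
# Forward to a central collector; if the connection drops,
# messages are queued in a 1 GiB reliable disk buffer instead
# of being dropped. "central.example.com" is a placeholder.
destination d_central {
  network("central.example.com"
    transport("tcp") port(514)
    disk-buffer(
      disk-buf-size(1073741824)
      reliable(yes)
    )
  );
};
```

The reliable(yes) setting trades some throughput for durability, which matters exactly when ingestion gaps would skew downstream analysis.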
High data ingestion costs
Getting data into data stores is often the most time-consuming and costly part of big data projects.
Varying data consumer requirements
Big data systems often serve a variety of data consumers, each with its own requirements.