Welcome, everyone, to One Identity's Virtual Unite Conference. My name is Neil Boyd. I'm the syslog-ng sales specialist for One Identity. My job today is to give you a virtual introduction to log management and why it is so important in today's high-velocity IT environments. Log data was once the domain of developers and support. It was originally used by application teams and level 3 support teams for deep-dive troubleshooting of applications or IT transactions.
The large, well-known correlation engines themselves were actually sold originally to operations teams to streamline IT operations. But as IT security issues became more and more prevalent, the role of the correlation engines really grew. And today you know these applications as SIEMs, or security information and event management platforms. Hacking, privacy concerns, information security, all of that has increased the dependence on log data as your SOC's main data source.
That's resulted in an explosion of data requirements across all firms: more infrastructure and more endpoints to monitor means more log data to collect. The problem is that the quantities of data are massive, the delivery and ingestion of all of that log data are not trivial, and of course your costs just keep going up and up. Now, there are standards to help you manage all of this data. Log data is governed by RFC 5424, the syslog protocol.
And it governs such things as the standard message format, the relevant terms and definitions, and how messages are handled. It has a ranking system for message severity. And it even allows for vendor-specific extensions in a structured way. syslog-ng, which is One Identity's log management platform, follows the RFC 5424 protocol, as well as the previous RFC 3164, which is now obsolete.
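To make that concrete, here is a minimal Python sketch of what an RFC 5424 message looks like on the wire. The hostname, application name, structured data block, and message text are made-up illustrative values; only the field layout and the PRI calculation (facility × 8 + severity) follow the RFC.

```python
# Minimal sketch of assembling an RFC 5424 syslog message (illustrative values only).
from datetime import datetime, timezone

FACILITY_LOCAL4 = 20   # facility code 20 = "local use 4" in RFC 5424
SEVERITY_NOTICE = 5    # severity code 5 = "notice" in RFC 5424

def build_rfc5424_message(facility, severity, hostname, app_name,
                          procid, msgid, structured_data, msg):
    """Assemble one RFC 5424 syslog message string."""
    pri = facility * 8 + severity  # PRI field: facility * 8 + severity
    timestamp = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    header = f"<{pri}>1 {timestamp} {hostname} {app_name} {procid} {msgid}"
    return f"{header} {structured_data} {msg}"

print(build_rfc5424_message(
    FACILITY_LOCAL4, SEVERITY_NOTICE,
    "mymachine.example.com", "myapp", "1234", "ID47",
    '[exampleSDID@32473 eventSource="app" eventID="1011"]',  # structured vendor extension
    "An application event log entry",
))
```

Running it prints a single line beginning with `<165>1 ...`, since facility 20 times 8 plus severity 5 gives a PRI of 165.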
But because nothing is ever easy, many vendors do not actually conform to those standards, and that adds further complexity to your IT teams' efforts to send data in the proper format to your SIEM. Now, many of the SIEM vendors ship their products with their own agents or collectors to ingest data into their platforms. And while that's all well and good for those platforms, these agents tend not to integrate well with other platforms.
They're very good at getting data into their own platform, but not so good at getting data into other platforms. And really, this is a core responsibility of good log management. You need a flexible, agnostic log management platform that can ingest from any source and feed any destination with all of your data properly formatted and ready for further processing. That's how you ensure downstream data quality. And in fact, data quality is really what it's all about.
By centrally managing your log data, you can lower the complexity of data sources and network routes. SIEM vendors charge based on the data consumed or the processors required. Your costs are further impacted by data storage requirements, which may be driven by security or compliance mandates. The data explosion has driven your SOC's operating costs higher and higher, but by adding a log management layer, you can address these issues and lower your costs.
So what are some of the benefits of log management? First is data quality. A log management layer is an abstraction layer between log producers and log consumers. Because the sources themselves are varied, the logs they send are not all the same, and the destinations they are heading to have their own protocols for ingestion. The log management layer normalizes this data, parses it for fast processing downstream, filters off unnecessary logs, and feeds the downstream applications in real time with the formats they require.
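As a rough illustration of that normalize, filter, and format idea, here is a small Python sketch. The field names and the rule that drops debug-level noise are purely illustrative assumptions; in a real deployment this work is done by the log management layer itself, not by hand-written scripts.

```python
# Illustrative sketch: normalize varied source events, filter noise, emit SIEM-ready JSON.
import json

def normalize(raw_event):
    """Map differently named source fields onto one common schema."""
    return {
        "timestamp": raw_event.get("time") or raw_event.get("ts"),
        "host": raw_event.get("host") or raw_event.get("hostname"),
        "severity": raw_event.get("severity", "info"),
        "message": raw_event.get("msg") or raw_event.get("message", ""),
    }

def keep(event):
    """Filter off logs the downstream SIEM does not need (here: debug noise)."""
    return event["severity"] != "debug"

def to_siem_format(event):
    """Serialize in the format the downstream consumer expects (JSON here)."""
    return json.dumps(event)

raw_events = [
    {"ts": "2024-01-01T00:00:00Z", "hostname": "fw01",
     "severity": "warning", "msg": "port scan detected"},
    {"time": "2024-01-01T00:00:01Z", "host": "app07",
     "severity": "debug", "message": "cache refreshed"},
]

for raw in raw_events:
    event = normalize(raw)
    if keep(event):
        print(to_siem_format(event))
```

Only the warning event is forwarded; the debug event is filtered off before it ever reaches the SIEM, which is exactly where the cost savings come from.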
This ultimately reduces your costs, first by optimizing the SIEM, sending it only the data it requires, and then by potentially reducing the infrastructure in place to do this. Finally, it adds to the security of the firm by encrypting this data both in motion and at rest, a standard that we see many of our customers already requiring their IT teams to deliver on.
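For the "in motion" half of that, the usual approach is syslog over TLS (RFC 5425, TCP port 6514). Below is a minimal Python sketch of a client sending one octet-counted message over a TLS connection; the collector hostname, port constant, and CA bundle path are placeholders for your own environment, not values from this presentation.

```python
# Sketch of "encryption in motion": one syslog message sent over TLS (placeholder endpoint).
import socket
import ssl

COLLECTOR_HOST = "logs.example.com"              # placeholder collector address
COLLECTOR_PORT = 6514                            # registered port for syslog over TLS
CA_FILE = "/etc/ssl/certs/ca-certificates.crt"   # CA bundle that signed the collector cert

message = "<165>1 2024-01-01T00:00:00Z app01 myapp - - - hello over TLS"
# RFC 5425 frames each message with an octet-count prefix: "<length> <message>"
frame = f"{len(message.encode())} {message}".encode()

context = ssl.create_default_context(cafile=CA_FILE)
with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=COLLECTOR_HOST) as tls:
        tls.sendall(frame)
```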
And that brings us to our part in this story. syslog-ng has been there since the very beginning. It started out as a project by a Hungarian grad student, who built syslog-ng back in 1998. He and his partners formed Balabit and introduced the open source version of syslog-ng, which was widely adopted around the world and grew to become the de facto industry standard.
By 2007, the Premium Edition had been introduced, and then in 2008 came syslog-ng Store Box, a log management appliance that runs syslog-ng PE at its core. The other point I would like to add on this slide is that in 2018, Balabit was acquired by One Identity, and that's how I got here.
Something that is really important to point out here is that this is a mature market. There are a lot of competitors. syslog-ng, however, has an advantage today that no one has been able to match, and that's its scalability and performance. The software was written in C and designed from the very beginning to scale massively and to be deployed in globally disparate environments.
Today, syslog-ng PE sits at the heart of the world's most demanding IT environments in financial services, telecom, manufacturing, and government. So now let's look at an example of a high-level architecture to give you an idea of what this looks like in your environment. Over here on the left, you can see different data sources. These are all generating logs, and they're ultimately sending to these downstream destinations. Maybe you only have a couple of destinations, such as your SIEM, some Kafka destinations for development, or some databases.
All of these sources, whether they're virtual machines, databases, security devices, network devices, or servers, be they Windows or Linux, need to be able to send logs to their destinations in a fast and readable format that those destinations can consume. By