We are just deserializing back into a Person object. I know it's a big bit of code to sift through, but it's all relatively simple: the benchmark class is declared as public class ProtobufVsJSONDeserializeBenchmark, the JSON side serializes with PersonString = JsonConvert.SerializeObject(person), and the Protobuf side writes into using (var memoryStream = new MemoryStream()) for serialization and reads from using (var memoryStream = new MemoryStream(PersonBytes)) for deserialization. This is over two times faster, which is not bad. So overall, Protobuf wins again, and by a bigger margin this time (in percentage terms) than in our serialization effort. But again, your mileage will vary heavily depending on what format your JSON is in.

Personally, I do not use micro benchmarks that much; I find it easier to simply repeat the test, e.g. many times in a loop, at least without making the test more complicated than it needs to be. The main benefit Benchmark.NET has is that it aids repeatability by getting rid of other noise artefacts and warming things up, but using Benchmark.NET does not make systematic measurement errors go away. Your test might be problematic due to the newly created MemoryStream in DeserializeProtobuf, which could measure the allocation cost of the MemoryStream rather than the actual serializer performance. To be really sure you are measuring things right, you should check under a profiler. Below are some numbers for deserializing 1m Book objects; see the linked full test suite with many different serializers, where you can also compare.
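A minimal sketch of what such a benchmark class might look like, reassembled from the fragments above. The Person fields, the sample data, and the exact protobuf-net attributes are my assumptions, not the article's exact code; the MemoryDiagnoser attribute is added here to illustrate the point about the per-call MemoryStream allocation showing up in the measurement.

```csharp
using System.IO;
using BenchmarkDotNet.Attributes;
using Newtonsoft.Json;
using ProtoBuf;

// Hypothetical Person model; the article's real type likely has more members.
[ProtoContract]
public class Person
{
    [ProtoMember(1)] public string FirstName { get; set; }
    [ProtoMember(2)] public string LastName { get; set; }
}

[MemoryDiagnoser] // surfaces allocations, e.g. the MemoryStream created per call
public class ProtobufVsJSONDeserializeBenchmark
{
    private byte[] PersonBytes;
    private string PersonString;

    [GlobalSetup]
    public void Setup()
    {
        var person = new Person { FirstName = "Jane", LastName = "Doe" };

        // Serialize once up front so the benchmarks measure only deserialization.
        using (var memoryStream = new MemoryStream())
        {
            Serializer.Serialize(memoryStream, person);
            PersonBytes = memoryStream.ToArray();
        }
        PersonString = JsonConvert.SerializeObject(person);
    }

    [Benchmark]
    public Person DeserializeProtobuf()
    {
        // Note: the newly created MemoryStream is part of what gets measured here,
        // which is exactly the systematic-error concern raised above.
        using (var memoryStream = new MemoryStream(PersonBytes))
        {
            return Serializer.Deserialize<Person>(memoryStream);
        }
    }

    [Benchmark]
    public Person DeserializeJson()
        => JsonConvert.DeserializeObject<Person>(PersonString);
}
```

Run via BenchmarkRunner.Run&lt;ProtobufVsJSONDeserializeBenchmark&gt;() in a Release build; the MemoryDiagnoser column then shows whether the MemoryStream allocation is a meaningful share of the Protobuf result.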