Maximum concurrent requests for a TensorFlow Serving client?

I'm using TensorFlow Serving to make predictions from a Java application, and I wanted to know: what is the maximum number of concurrent requests the TensorFlow Serving server can handle?

Also, can we improve concurrency by having more threads on the client side issue predictions in parallel? On the server side, are predictions processed sequentially or by parallel threads, and is that behavior configurable?
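For context, this is roughly the client-side pattern I have in mind: a fixed thread pool where each task issues one blocking prediction call. The `predict` method below is just a stand-in for the real gRPC stub call (it isn't the TensorFlow Serving API); the sketch only shows the threading structure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentPredictSketch {
    // Stand-in for the real blocking Predict call made through the
    // TensorFlow Serving gRPC stub; here it just echoes the request id.
    static String predict(int requestId) {
        return "response-" + requestId;
    }

    public static void main(String[] args) throws Exception {
        int numThreads = 8;    // client-side concurrency knob
        int numRequests = 32;

        // Each submitted task would block on one in-flight request,
        // so up to numThreads requests run concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < numRequests; i++) {
            final int id = i;
            futures.add(pool.submit(() -> predict(id)));
        }
        for (Future<String> f : futures) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

My question is whether raising `numThreads` here actually buys anything, or whether the server serializes the predictions anyway.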

Thanks a lot!