Light fields capture light rays arriving from a scene at different angles, enabling post-capture rendering applications such as interactive viewpoint selection or refocusing. However, this additional angular information comes at the price of a significant increase in data volume compared to traditional 2D images. While light field compression is still an active research effort, with the latest coding standards showing impressive compression gains, light fields are in practice often stored on remote servers to avoid consuming unnecessary storage on user devices. A typical cost-effective solution for light field visualisation is then to render the requested image on the server and transmit the result to the user. A simpler alternative is to send the entire light field to the user and perform the rendering directly on the client side, avoiding transmission delay at viewing time. While the latter solution seems intuitively less efficient and is usually discarded in previous work because of an expected unacceptable startup delay, we propose a quantitative study comparing both solutions in terms of rate-distortion (RD) performance. A counterintuitive finding of this paper is that accepting a reasonable startup delay (a few seconds) can yield a significant improvement in RD performance.