The shift from analogue to digital sources, and then to networked sources, has implications for anyone seeking good quality sound reproduction at home. Then and now, the amplification and output stages matter. But the source has likely changed, and that changes what constitutes a desirable system.
Olden days: all-analogue
In the old days the source was analogue, and many different parts had a role to play in generating the source signal. Consider vinyl: producing a standard line-level signal for an amplifier involves the cartridge (moving coil or moving magnet, along with tracking force, tonearm geometry and so on), the turntable itself (e.g. wow and flutter in the rotation), and a phono stage to boost the signal to line level.
Any part could be a weak link here, so to ensure a good quality source signal each of these pieces needed to be up to snuff. This is great for the audiophile stereotype (a deep-pocketed gear fetishist with a good level of technical understanding), but that is probably not a large subset of the population. One consequence of an all-analogue chain, of course, is that no digital-to-analogue conversion is required at any point: the whole signal path is analogue.
Recent: digital physical media
Physical digital media store sound as data: in the case of CDs, two channels of 16-bit pulse code modulation sampled at 44,100 samples per second. Most CD players have a DAC built in, and so provide analogue outputs.
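To get a feel for the raw data rate those figures imply, here is a quick back-of-the-envelope calculation, sketched in Python:

    # Raw data rate of CD audio: 2 channels x 16 bits x 44,100 samples/s.
    channels = 2
    bits_per_sample = 16
    sample_rate = 44_100  # samples per second, per channel

    bits_per_second = channels * bits_per_sample * sample_rate
    print(f"{bits_per_second / 1000:.1f} kbit/s")             # 1411.2 kbit/s
    print(f"{bits_per_second * 3600 / 8 / 1e6:.0f} MB/hour")  # ~635 MB per hour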
Now: networked digital source
Now that most homes have a network, audio data can be obtained from the network. So our source device needs to do three things: provide some kind of interface (possibly via another device on the network, like a smartphone), get audio data from the network, and perform the digital-to-analogue conversion. A few years ago this might have been a more novel device like the Logitech Squeezebox Touch, something of a trailblazer when it was released in 2010; a decade later most of the major audio equipment brands (Naim, Arcam, etc.) offer a device that will do this.
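To illustrate how small that job is in software terms, here is a minimal sketch of the fetch, decode and play steps in Python. It assumes the third-party requests, soundfile and sounddevice packages are installed, and the URL is a hypothetical stand-in for wherever a server on your network exposes the file:

    # Minimal sketch of a networked source: fetch audio data over the network,
    # decode it, and hand the samples to the DAC via the OS audio stack.
    import io

    import requests
    import soundfile as sf    # decodes FLAC (and WAV, etc.) via libsndfile
    import sounddevice as sd  # plays PCM samples through the default output

    URL = "http://nas.local:8000/music/example.flac"  # hypothetical address

    response = requests.get(URL)
    response.raise_for_status()

    # Decode the FLAC data into an array of samples plus its sample rate.
    samples, sample_rate = sf.read(io.BytesIO(response.content))

    # Hand the decoded PCM to the output device at the correct sample rate.
    sd.play(samples, samplerate=sample_rate)
    sd.wait()  # block until playback finishes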
Where the data comes from does not matter, as long as the player can obtain, decode and otherwise process it quickly enough to keep up with the sample rate of the DAC. So the serving device can be pretty much any general purpose computer: a standard desktop machine that is also used for other things, a hobbyist device like a Raspberry Pi, or a NAS that is already providing network-attached storage. Standards such as DLNA are widely supported by both open-source and proprietary software. If you already have music on CD you can simply rip the discs to FLAC (a format that offers some compression without any loss of information), make sure the metadata is up to scratch, and then serve it up with your choice of software.
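Getting the metadata up to scratch is the tedious part, but it is easy to at least flag gaps automatically. A minimal sketch, assuming the third-party mutagen package and an example library path:

    # Flag FLAC files in a ripped library that are missing basic tags.
    from pathlib import Path

    from mutagen.flac import FLAC

    MUSIC_DIR = Path("/srv/music")  # hypothetical library location
    REQUIRED_TAGS = ("artist", "album", "title", "tracknumber")

    for path in sorted(MUSIC_DIR.rglob("*.flac")):
        audio = FLAC(str(path))
        missing = [tag for tag in REQUIRED_TAGS if tag not in audio]
        if missing:
            print(f"{path}: missing {', '.join(missing)}")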
The reasons for serving data from a general purpose computing device are threefold. Firstly, using open standards avoids vendor lock-in. Secondly, serving data over a TCP/IP network is entirely commodified, so if we want the best price/performance it makes sense to take advantage of that. Thirdly, ripping audio CDs and getting the metadata right is not a five-minute task, so you probably only want to do it once; with an open system you can then re-use those files however you wish.
The reason to serve audio from within our own network rather than from the internet is that we then control the extent, if any, of the data compression. There is absolutely no point in having a high quality amplification and output stage if the input is poor, e.g. a 128 kbps MP3. With most streaming services it will not be obvious exactly what level of compression is used, and it is in the provider's interest to compress as much as possible to save bandwidth; this is why YouTube has such a plethora of quality settings, for instance.