The starting point is the ability to run the LLM as you wish, for any purpose - so if a license prohibits some uses and you have to begin every use by asking whether it's permitted, that's a fail.
Then there is the freedom where "source" matters: the practical freedom to change the model's behavior so that it does your computing as you wish. That's a bit tricky. One interpretation would require the training data, the training code, and the parameters; but for current LLMs, the training hardware and the cost of running it are such a major practical limitation that one could argue the ability to change the behavior (which is the core freedom we actually want) is separate from the ability to recreate the model. Changing behavior is most relevant in the context of "instruction training", which happens after the main training and is the main determiner of behavior (as opposed to capability). On that view, the main "source" would be the data for that step: the instruction-training data, plus the model weights from before that fine-tuning. With those, you can fine-tune the model on different instructions - which requires far fewer resources than training from scratch - and you don't have to start from the instructions and values imposed on the LLM by someone else.
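To give a rough sense of why the fine-tuning step is so much cheaper: one common mechanism (an illustration, not the only way to do instruct training) is parameter-efficient fine-tuning such as LoRA, where the full weight matrix stays frozen and only a low-rank update is trained. A toy NumPy sketch of the parameter arithmetic, with made-up dimensions:

```python
import numpy as np

# Toy LoRA-style arithmetic: instead of updating a full d x d weight
# matrix W, train only a low-rank update B @ A with rank r << d.
d, r = 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen base weights
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable, zero-init so W_eff == W at start

W_eff = W + B @ A                        # effective weights after fine-tuning

full_params = W.size
adapter_params = A.size + B.size
print(f"full: {full_params:,}  adapter: {adapter_params:,}  "
      f"ratio: {adapter_params / full_params:.4%}")
```

For these (hypothetical) dimensions the trainable adapter is under half a percent of the full matrix, which is the kind of gap that makes "fine-tune on your own instruction data" feasible on hardware that could never recreate the base model.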