In a recent release of Blender, a safeguard for Python scripts was introduced to prevent scripts from doing any damage to the system (although I doubt many Blender users actually want to wreak havoc). Blender now has a built-in system for protecting against malicious scripts, but I do not trust it. How can I manually determine whether a Python script is hostile?
So far this is what I have come up with:
`import os` — there is no reason a Python script for Blender needs this module, and it lets the script run cmd/terminal commands.
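For reference, here is a rough sketch of the kind of check I had in mind (my own idea, not anything built into Blender): scan a script's AST for imports of a hand-picked list of modules I would want to review before running it. The module list here is just my own guess at what looks risky.

```python
import ast

# Hand-picked modules whose presence in a downloaded script I would review manually.
# (My own list, not anything official from Blender.)
SUSPICIOUS_MODULES = {"os", "subprocess", "shutil", "socket", "urllib", "ctypes"}

def suspicious_imports(source):
    """Return the set of flagged top-level module names imported by the script source."""
    flagged = set()
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in SUSPICIOUS_MODULES:
                    flagged.add(root)
        elif isinstance(node, ast.ImportFrom):
            root = (node.module or "").split(".")[0]
            if root in SUSPICIOUS_MODULES:
                flagged.add(root)
    return flagged

if __name__ == "__main__":
    sample = "import bpy\nimport os\nos.system('echo hello')"
    print(suspicious_imports(sample))  # prints {'os'}
```

Of course this only catches plain imports, not things like `__import__("os")` or `exec`, so it is just a first pass before reading the script by hand.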
What other things are there to watch out for?

`os.path` is part of `os` and it's pretty useful.