I am trying to make a Python package distribution for some popular Fortran codes in my field. I want it to use the most standard approach, with a setup.py file. The related question was helpful for learning how to wrap Fortran extensions.
When using this approach, I noticed some confusing behavior when mixing setuptools and numpy.distutils. Is it bad practice to mix the two? As of 2015, it seems preferable to use setuptools as much as possible.
However, I would like to build Fortran extensions in a way that is compatible with numpy, so I import Extension and setup from numpy.distutils.
I'm using the following basic approach:
from setuptools.command.develop import develop
from numpy.distutils.core import Extension, setup

ext_modules = [Extension("my_package.fortran_mod",
                         sources=['src/fortran_mod.f'])]

class MyDevelop(develop):
    def run(self):
        my_script()
        develop.run(self)

setup(
    ...
    ext_modules=ext_modules,
    cmdclass={'develop': MyDevelop})
This seems to work but I have questions.
- Is it generally good practice to mix setuptools and numpy.distutils?
- Does the order I import them matter? Should I always import setuptools first?
- Is there an official, up-to-date tutorial for packaging extensions to numpy? Perhaps even one with some discussion of Fortran extensions?
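On the import-order point, one thing that can be probed directly is that setuptools patches (and, in modern versions, vendors) distutils when it is imported, which is why importing setuptools before numpy.distutils matters. A minimal sketch, nothing here is specific to my package:

```python
# Sketch: inspect the distutils machinery you get after importing
# setuptools first. setuptools patches/vendors distutils at import time,
# so numpy.distutils then builds on top of the patched commands.
import setuptools          # imported first, so its distutils patches apply
import distutils.core

# numpy.distutils.core re-exports setup/Extension layered on this machinery:
# from numpy.distutils.core import setup, Extension

print(distutils.core.__name__)               # distutils.core
print(hasattr(distutils.core, "setup"))      # True
```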
Some links
https://www.youtube.com/watch?v=R4yB-8tB0J0
http://www.fortran90.org/src/best-practices.html#interfacing-with-python
- nogil is very important if you want to be able to delegate down to native code and make use of threading/multiprocessing. Your MPI example is a perfect time to release the GIL!
- With ctypes, just take care of the function names, which, depending on your compiler, get a trailing underscore.
- Recently I have become a fan of pybind11, which allows you to write the entire Python module from C++, with the bonus advantage that the ancient Fortran code gets a modern front-end.
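The trailing-underscore point can be sketched with a small helper. This assumes the common gfortran/Unix name-mangling convention (lower-case plus trailing underscore); the shared-library path and procedure name in the usage comments are hypothetical:

```python
import ctypes  # stdlib; only needed for the (commented) usage below

def fortran_symbol(name: str, trailing_underscore: bool = True) -> str:
    """Map a Fortran procedure name to the symbol the compiler emits.

    gfortran and most Unix Fortran compilers lower-case the name and
    append a trailing underscore; this sketch covers only that common
    convention (other compilers/flags differ).
    """
    sym = name.lower()
    return sym + "_" if trailing_underscore else sym

# Hypothetical usage once src/fortran_mod.f is compiled to a shared library:
# lib = ctypes.CDLL("./fortran_mod.so")           # path is illustrative
# add = getattr(lib, fortran_symbol("VEC_ADD"))   # resolves lib.vec_add_
# Fortran passes arguments by reference, so wrap scalars with byref:
# n = ctypes.c_int(3)
# add(ctypes.byref(n), ...)
```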