2010-10-29

For about a week now, I have had a new tool which I frequently
use.

	#!/bin/sh
	if [ $# -eq 0 ] ; then
		find -type f | sort
	else
		re="$1"
		shift
		find "$@" -type f | sort | xargs -d '\n' grep -I "$re"
	fi

I call it `rr', for ``recursive regexp'', because this is what it
does. I wrote it to replace `grep -r', which I consider
conceptually bad. The common use is:

	rr RE

This does a recursive grep for RE, starting in the working
directory. I use this a lot to find definitions and usage of
identifiers. Others use something like ctags for this.

All further arguments are simply appended to the find call, hence
one can change the starting directory (or give several of them)
for the recursion:

	rr RE DIR

When invoked without arguments, a sorted recursive listing of all
files within the directory tree, rooted at the working directory,
is printed.

The tool was more of a ``quick shot'', but it proved so useful to
me that I probably should rethink its design and implementation
thoroughly for it to become more sophisticated.

I know about the -0 option of xargs, which avoids some strange
corner cases, but I don't like to use it. The origin of the
problem was allowing the newline to be a valid file name
character. Now, all line-based tools suffer from this decision.
I simply assume that there are no file names containing a
newline. While this leaves the corner case uncovered, it seems
to be ``the right way''.

Regarding nmh: You better don't look into `sbr/m_getfld.c'. And
don't ever create such a thing! I'm so glad -- I can't tell how
glad -- that today's computers are fast enough to avoid such
speed optimizations.

http://marmaro.de/lue/

markus schnalke
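As a postscript: for readers who do want the newline corner case
covered, a sketch of what the -0 variant mentioned above might
look like. It assumes GNU find, sort, and xargs (-print0, sort
-z, and xargs -0 are GNU extensions, not POSIX); the name `rr0'
is hypothetical and not part of the original tool.

```shell
#!/bin/sh
# rr0 -- NUL-safe sketch of `rr': file names are passed
# NUL-terminated through the whole pipeline, so names containing
# newlines survive. (Requires GNU find/sort/xargs.)
if [ $# -eq 0 ] ; then
	# listing mode: print the sorted file tree, one name per line
	find . -type f -print0 | sort -z | tr '\0' '\n'
else
	re="$1"
	shift
	# grep mode: -0 replaces the original's xargs -d '\n'
	find "$@" . -type f -print0 | sort -z | xargs -0 grep -I "$re"
fi
```

The price is exactly what the original post objects to: every
stage of the pipeline must speak the NUL convention, instead of
the plain line-based format that all the usual tools share.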