The upshot of this ubiquitous hash table usage is that just about anything can be modified. In these languages a class is just an object, and an object is just a hash table, so classes can grow new methods at any time. Similarly, modules and global objects can grow new capabilities at will. The practice of modifying these global objects at runtime is called, aptly enough, "monkeypatching".
Just because monkeypatching is possible doesn't mean it's always a good idea. It can be very hard to manage complexity when every module you import or method you run has the capability to modify how everything works. It's the same kind of objection raised against global variables: with great power comes great responsibility. Don't monkeypatch just for the heck of it.
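As a concrete illustration, here is a minimal Ruby sketch (the Greeter class and shout method are invented for this example):

```ruby
# A class is itself an object, so it can grow new methods at runtime.
class Greeter
  def initialize(name)
    @name = name
  end
end

# Monkeypatch: attach a brand-new method to the existing class.
Greeter.send(:define_method, :shout) do
  "#{@name.upcase}!"
end

puts Greeter.new("world").shout   # WORLD!
```

Every existing and future Greeter instance now responds to shout, which is exactly the power (and the danger) described above.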
While each of the three titular languages is capable of monkeypatching, each has a culturally distinct take on it:
Python shuns monkeypatching in the normal course of programming: don't modify other people's objects, and don't modify other modules at runtime. A quirk of CPython reduces the temptation somewhat: the root object type, being implemented in C, is not a valid target for monkeypatching, so there's no way to add a method or attribute to every existing object.
When testing, however, monkeypatching becomes very convenient. The py.test testing framework, for instance, has a monkeypatch fixture that lets you monkeypatch a module or class before a test and automatically restores the original state after the test.
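Under the hood, a fixture like that just saves, replaces, and restores. The same pattern, sketched in Ruby with invented names:

```ruby
# Save/replace/restore: the essence of a test-scoped monkeypatch.
class Clock
  def now
    "real time"
  end
end

# Save the original implementation so it can be restored later.
original_now = Clock.instance_method(:now)

# Patch: the code under test now sees the fake behavior.
Clock.send(:define_method, :now) { "fake time" }
puts Clock.new.now   # fake time

# Restore: everything after the test sees the original again.
Clock.send(:define_method, :now, original_now)
puts Clock.new.now   # real time
```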
Testing aside, Python is the strictest of the three about monkeypatching. Even backports of library features are distributed as separately imported modules rather than as "polyfills". The fact that the practice is entirely possible, yet almost never used by experienced Python programmers, is a testament to how much "culture" factors into the end result with a dynamic language.
Addendum: gevent is probably the largest exception to the "no monkeypatching" rule for Python. It uses monkeypatching to intercept nearly all I/O calls and route them through a non-blocking I/O loop. This is so convenient that even the Python community has begrudgingly accepted it. Still, the inherent distaste for monkeypatching has led to several async alternatives for Python, like tornado and asyncio, which make asynchronous I/O explicit. And, as someone mentions each time it's brought up, even gevent can be used without monkeypatching.
JavaScript takes a middle-of-the-road approach to monkeypatching. It is used in day-to-day development, but never "just because". The primary accepted use of monkeypatching in JavaScript is the polyfill: modifying a global object like Array when you are exactly replicating the API of a new addition to the ECMAScript standard that you'd like to use in your code, but can't rely on all browsers having available.
The Prototype framework is a major exception to this. It monkeypatches all DOM elements to provide an array of new capabilities. That strategy fell out of favor, however, replaced by the technique (used by jQuery and underscore.js) of wrapping native DOM objects to provide extra capabilities rather than injecting them. Even Prototype now plans to move away from monkeypatching DOM objects in this way.
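Language aside, the wrapping approach looks roughly like this Ruby sketch (Wrapped and its methods are invented for illustration):

```ruby
# Wrapping: new capabilities live on a wrapper object,
# leaving the wrapped object's own class untouched.
class Wrapped
  def initialize(obj)
    @obj = obj
  end

  def middle
    @obj[@obj.length / 2]
  end

  def unwrap
    @obj
  end
end

puts Wrapped.new([1, 2, 3]).middle     # 2
puts [1, 2, 3].respond_to?(:middle)    # false -- Array was not patched
```

The trade-off is that callers must wrap and unwrap explicitly, in exchange for never touching globals.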
Finally, we come to Ruby, whose community has wholeheartedly embraced the wild world of monkeypatching in the normal course of development. This usage goes right to the core language libraries. For instance, requiring the Set module monkeypatches a .to_set method onto enumerable objects. In addition, class definitions are "open" in Ruby: you can add new methods and attributes to any class, from whatever module you want, using normal class definition syntax.
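That open-class syntax looks like this (the second method is invented for illustration):

```ruby
# Reopening a core class: ordinary class-definition syntax
# adds a method to every Array, past and future.
class Array
  def second
    self[1]
  end
end

puts [1, 2, 3].second   # 2
```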
While there is some rhetoric in the Ruby community that monkeypatching should be done with great care, in practice it happens whenever it's convenient. Likewise, Ruby developers don't seem bothered by the occasional name collision: if you test your code well, you'll find the problems and avoid trouble. And the plethora of methods added by all sorts of libraries makes for very concise code.
The Ruby community has taken the "embrace the chaos" idea and run with it, and it seems they're able to manage just fine. Perhaps the hand-wringing about monkeypatching in other language communities is unwarranted. Or perhaps it exacts a cost, but one more than paid for by the convenience of ubiquitous monkeypatching. In either case, it's unlikely to vanish from Ruby any time soon.
What I'd like to see
Personally, my background is in Python development, so I lean toward that end of the spectrum in my opinion of monkeypatching. But it's hard to take such a hard-line view when you see the extremely elegant and concise code that comes from Ruby objects being stuffed full of useful methods by every library that comes along.
What I think could be useful is a kind of "scoped" monkeypatching, to control who gets the patched behavior and who doesn't.
Consider this example:
```ruby
module MyModule
  require 'set'
  [1, 2, 3].to_set   # works!
end

module OtherModule
  [1, 2, 3].to_set   # method missing!
end
```
In this way, you could get the benefits of monkeypatching without causing problems for libraries you import that have no idea what you're going to add to their objects. This is a lot like inheritance, and could probably be implemented as syntactic sugar for inheritance. But it's a pattern I think could strike that balance effectively.
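For what it's worth, Ruby's refinements feature (official since Ruby 2.1) provides roughly this kind of scoping. A sketch, with invented names:

```ruby
require 'set'

# The patch is packaged in a refinement module...
module SetConvenience
  refine Array do
    def to_set_scoped
      Set.new(self)
    end
  end
end

# ...and only code that opts in with `using` sees it.
module MyModule
  using SetConvenience

  def self.demo
    [1, 2, 3].to_set_scoped   # works here
  end
end

puts MyModule.demo.inspect
# Outside the `using` scope, Array is unchanged:
# calling [1, 2, 3].to_set_scoped here would raise NoMethodError
```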