The idea I had was to develop an FTP server in which you may replace any part. As a result, I broke the application into logical modules and designed each class to be extended. I used Java interfaces to allow different implementations of the same features, and the Spring Framework to wire the application together. The result is an application configured through a Spring XML file, which lets the user apply different implementations and configure everything the way she desires. Here is the list of components that form this FTP server, with very brief descriptions:
Core
Responsible for starting and stopping the FTP server and all of its modules.
Core Storage
The global data storage that exists in a single instance and contains all the information modules need to exchange. This is the so-called Application Scope.
Control Connection
Control Connections perform information exchange between the server and clients. A Control Connection represents a user.
Data Connection
Data Connections perform file transfers between the server and clients.
Connection Pool
All connections are added to Connection Pools. A Connection Pool executes its connections' Self-Service routines and destroys failed connections. There is one pool for Control Connections and one for Data Connections.
Control Connector
Constantly listens on a predefined port for incoming Control Connections. This module creates Control Connections.
Data Port Listener Set
The set of all Data Port Listeners. This module simply replicates Core commands to the underlying Data Port Listeners, moving this functionality out of the Core.
Data Port Listener
Constantly listens on a predefined port for an incoming data connection; this is how the PASV feature is implemented. This module creates Data Connections.
Data Connection Initiator
Establishes a data connection to the user's machine; this is how the PORT feature is implemented. This module creates Data Connections.
Virtual File System
The FTP server uses it to obtain data streams mounted to virtual files. The module is called Virtual because it can be backed by your hard drive, a database, or any other storage.
Session
Created for every user, the Session is available from its Control Connection. It contains the user-specific information the rest of the modules require. This is the so-called Session Scope.
Command Processor
This module executes the user's commands and sends back replies.
There is a special status called POISONED which is used to drop connections. If the core sets its status to POISONED, the connection pools poison all their connections. A poisoned control connection is not allowed to read any further user input and must die as soon as its output queue (which contains information going back to the user) becomes empty and the associated data connection finishes. The control connector also poisons every incoming connection. This special POISONED status is required to gracefully shut down the server: a poisoned server does not allow new users to connect and waits for already connected users to finish the task at hand and disconnect before terminating the Core. (At the time of this writing the Generic Bundle does not use this feature and terminates the server instantly.)
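The life-and-death rules above can be sketched roughly as follows. All class and member names here are hypothetical illustrations, not ColoradoFTP's actual API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the poisoning rules for a control connection.
enum Status { ACTIVE, POISONED }

class PoisonedConnectionSketch {
    volatile Status status = Status.ACTIVE;
    final Deque<String> outputQueue = new ArrayDeque<>(); // replies going back to the user
    volatile boolean dataConnectionActive = false;

    // A poisoned connection must not read any further user input.
    boolean mayReadInput() {
        return status != Status.POISONED;
    }

    // A poisoned connection dies once its output queue drains and the
    // associated data connection finishes.
    boolean shouldDie() {
        return status == Status.POISONED
                && outputQueue.isEmpty()
                && !dataConnectionActive;
    }
}
```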
The control connector accepts incoming control connections and the data port listener accepts incoming data connections. These modules work in a similar manner: listen on a predefined port for a new connection; once a user establishes a connection, add it to a connection pool and wait for the next one. A data connection may also be established via the data connection initiator. This module does not wait for a user to connect but instead connects to the user's machine itself. Once a connection is established, it adds the connection to a connection pool.
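The accept-and-pool loop can be sketched with plain `java.net` sockets. This is a minimal illustration under my own naming, not ColoradoFTP's actual listener code:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.function.Consumer;

// Hypothetical sketch of a connector/listener: block on accept(), hand the
// new socket to a connection pool, then go back to waiting for the next user.
class ListenerSketch {
    static void acceptOne(ServerSocket server, Consumer<Socket> pool) throws IOException {
        Socket incoming = server.accept(); // blocks until a user connects
        pool.accept(incoming);             // register the socket with a connection pool
    }

    static void listen(ServerSocket server, Consumer<Socket> pool) throws IOException {
        while (true) {
            acceptOne(server, pool);
        }
    }
}
```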
Application Scope and Session Scope Storages
There are two storage objects available: the application scope storage (Core Storage) and the session scope storage (Session). Both may hold arbitrary objects. Modules use these storages to exchange information and objects. The application scope storage is a singleton which contains common data, and every user has her own session scope storage which contains user-specific data.
The control connection reads user input. It then uses the command factory to form a command object based on the input. The command goes into the command processor, which executes it. The command performs its logic and forms a reply object. The control connection pushes this reply back to the user. If a command formed from the user input cannot be executed, it is substituted with one of the system commands. The command processor then executes the system command and a reply is sent back to the user as usual. Most of the system commands contain error messages.
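The input-to-reply cycle can be sketched as follows. The names here are hypothetical; the real factory, processor, and reply types live in the ColoradoFTP code base:

```java
// Hypothetical sketch of the command flow: user input -> command -> reply.
interface Command {
    String execute(); // returns the reply text pushed back to the user
}

class CommandFactorySketch {
    // Forms a command object from the raw user input. Unknown or unusable
    // input is substituted with a system command carrying an error reply.
    static Command create(String input) {
        if (input.startsWith("NOOP")) {
            return () -> "200 NOOP command successful.";
        }
        return () -> "500 Command not recognized."; // substituted system command
    }
}

class CommandProcessorSketch {
    // Executes the command; the control connection sends the reply onward.
    static String process(Command command) {
        return command.execute();
    }
}
```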
Data Transfer after 150 Reply
The FTP specification demands that a file transfer (or a connection to the user's machine) begin only after the user receives the 150 reply from the server. This means that the data connection initiator (the module that attempts a connection to the user's machine) must wait until the control connection sends off the 150 reply. To implement this behaviour I introduced a special attribute that holds the amount of bytes-wrote-to-user recorded before the control connection posts the 150 reply. The data connection initiator attempts a connection to the user's machine only after the control connection's current amount of bytes-wrote-to-user exceeds the value stored in the attribute.
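A minimal sketch of this hand-off, assuming hypothetical names (the real attribute lives on the control connection in ColoradoFTP):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the 150-reply hand-off. The control connection counts
// bytes actually written to the user; when the 150 reply is queued it records
// the current counter. The data connection initiator may connect only after
// the counter exceeds that marker, i.e. after the 150 reply has gone out.
class TransferGateSketch {
    private final AtomicLong bytesWroteToUser = new AtomicLong();
    private volatile long marker = Long.MAX_VALUE; // no transfer pending yet

    void on150ReplyQueued() {
        marker = bytesWroteToUser.get(); // remember the counter before the reply
    }

    void onBytesWritten(int count) {
        bytesWroteToUser.addAndGet(count); // called as replies reach the user
    }

    boolean mayOpenDataConnection() {
        return bytesWroteToUser.get() > marker;
    }
}
```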
Connection Life Cycle
When a connection is created and configured, it is added to a connection pool. The connection contains internal threads for its data reading and data writing routines. The third routine, called self-service, is executed by the connection pool. Why? Because this routine is not as critical as the read and write routines, and it does not make sense to spawn a new thread for it. As a result, all connections' self-service routines are executed in the same thread by a connection pool. The connection serves its purpose until it decides to die or the user disconnects. If the connection decides to die, it throws an exception in its self-service routine and the connection pool calls its destruction method. If the user disconnects, the connection calls the destruction method itself from either the read or the write thread, whichever detects this event first. The connection pool periodically removes dead connections from its internal list.
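The pool side of this life cycle can be sketched as a single loop; the interface and class names are hypothetical, not ColoradoFTP's actual types:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the self-service loop: one pool thread services all
// connections; a connection that throws has decided to die and is destroyed.
interface PooledConnection {
    void selfService() throws Exception; // periodic housekeeping
    void destroy();                      // close sockets, stop read/write threads
}

class ConnectionPoolSketch {
    private final List<PooledConnection> connections = new CopyOnWriteArrayList<>();

    void add(PooledConnection connection) {
        connections.add(connection);
    }

    // Runs one self-service pass; returns how many connections were destroyed.
    int serviceAll() {
        int destroyed = 0;
        for (PooledConnection connection : connections) {
            try {
                connection.selfService();
            } catch (Exception decidedToDie) {
                connection.destroy();           // the connection signalled its death
                connections.remove(connection); // drop it from the internal list
                destroyed++;
            }
        }
        return destroyed;
    }
}
```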
The server is configured in the file conf/beans.xml located in your server's home directory. This is a Spring Framework XML file which defines all the objects the ColoradoFTP server requires. Since this is a Spring Framework file, you may easily add or replace any object, provided you are familiar with the Spring Framework. Look into the constructor and getters/setters of any bean from the XML file and amend its properties as you see fit. There is also the project's Wiki, which is no longer maintained and may contain stale data, but it explains how to configure the server and its plug-ins and provides XML examples. More than that, each plug-in has a sample XML file which you can look into to understand how to integrate it with your FTP server. Those are available in the Git repository.
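For illustration only, replacing a module might look like the snippet below. The bean id, class name, and property here are hypothetical; check your own conf/beans.xml for the real ones:

```xml
<!-- Hypothetical example: swapping in your own implementation of a module.
     The id must match the one the rest of beans.xml wires against. -->
<bean id="fileSystem" class="com.example.ftp.DatabaseFileSystem">
    <property name="connectionUrl" value="jdbc:postgresql://localhost/ftp"/>
</bean>
```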
The FTP server logs its activity to the file log/server.log located in its directory. This is a plain text file and you may open it in a text editor. By default the log level is set to INFO, which outputs very little information. You can change the level to DEBUG to see much more of the server's activity. Just edit the file conf/log4j.properties and change INFO to DEBUG: