A neural network (NN), an ensemble of interconnected neurons, can store memories as so-called "fixed points": steady activity patterns that are solutions of the neural dynamics and are associated with individual memory items. In past years, several works have been devoted to determining the maximum storage capacity of NNs, especially for the Hopfield network, the most popular kind of NN. By analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns P exceeds a fraction (≈ 14%) of the network size N. In a recently published paper, we study the storage performance of a generalized Hopfield model in which the diagonal elements of the connection matrix are allowed to be nonzero. We investigate this model at finite N. We derive an analytical expression for the number of retrieval errors and show that, when the number of stored patterns is increased beyond a certain threshold, the errors start to decrease and fall below one for P ≫ N. We demonstrate that the trade-off between storage efficiency and retrieval accuracy is governed by the number of patterns P stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the connection matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of NNs with high storage capacity that can retrieve the desired patterns without distortion.
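The setting described above can be illustrated numerically. The following is a minimal sketch, not the paper's actual method: it assumes the standard Hebbian prescription for the connection weights, W = (1/N) Σ_μ ξ^μ (ξ^μ)ᵀ, with an option to keep or zero the diagonal elements, and measures retrieval errors as the number of flipped units after synchronous sign-update dynamics started from a stored pattern. All function names and parameter choices are illustrative.

```python
import numpy as np

def hebbian_weights(patterns, keep_diagonal=True):
    """Hebbian connection matrix W = (1/N) * sum_mu xi^mu (xi^mu)^T.

    keep_diagonal=False zeroes W_ii, as in the classical Hopfield model;
    keep_diagonal=True leaves the diagonal free, as in the generalized model.
    """
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    if not keep_diagonal:
        np.fill_diagonal(W, 0.0)
    return W

def retrieval_errors(W, pattern, n_steps=20):
    """Run synchronous sign updates from a stored pattern and count
    how many units of the reached state differ from that pattern."""
    s = pattern.copy()
    for _ in range(n_steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1  # break ties deterministically
        if np.array_equal(s_new, s):
            break  # reached a fixed point
        s = s_new
    return int(np.sum(s != pattern))

# Compare retrieval errors with and without the zero-diagonal constraint
# for increasing storage loads (random +/-1 patterns, illustrative sizes).
rng = np.random.default_rng(0)
N = 100
for P in (5, 20, 50):
    patterns = rng.choice([-1.0, 1.0], size=(P, N))
    W_free = hebbian_weights(patterns, keep_diagonal=True)
    W_zero = hebbian_weights(patterns, keep_diagonal=False)
    err_free = np.mean([retrieval_errors(W_free, p) for p in patterns])
    err_zero = np.mean([retrieval_errors(W_zero, p) for p in patterns])
    print(f"P={P:3d}: mean errors, diagonal kept={err_free:.2f}, "
          f"diagonal zeroed={err_zero:.2f}")
```

Note that with the diagonal kept, each unit receives a self-coupling term of strength P/N that stabilizes the stored configurations, which is the mechanism the generalized model exploits; the sketch only demonstrates the qualitative effect, not the analytical error expression of the paper.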