Abstract:
Manufacturing systems of the 21st century must cope daily with continuous change: specification changes, equipment breakdowns, and fluctuations in customer orders. Consequently, the need for self-adjustment to market requirements has grown. One of the many methods tailored to the requirements of the production sector is the holonic manufacturing system (HMS). In this thesis, a practical implementation of HMS is described. A multi-agent approach from distributed artificial intelligence is proposed to generate effective and adaptive control mechanisms for managing dynamic processes in a realistic manufacturing testbed. Here, intelligent agents within a manufacturing system, such as products, machines, and automated guided vehicles (AGVs), form a self-controlling network to manage the pickup-dispatching problem of multiple single-load AGVs. This mid-level shop-floor problem is addressed with a reinforcement learning (RL) method: the applicability of Q-learning, a widely used RL algorithm, to the pickup-dispatching problem is investigated. The aim of this study is twofold. First, the study determines whether an AGV agent can learn the best dispatching rule with respect to a system goal in various cases; this is investigated experimentally with an agent-based simulation model. The second objective is to demonstrate the feasibility of adaptive learning by AGVs. The results show that the AGVs learn to apply the best dispatching rule and adopt another rule when the initially adopted rule starts to fail. The findings demonstrate that a learning AGV agent can adapt to changes in its environment and learn to favor the best action in a given state.
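The core mechanism referred to above, an AGV agent using Q-learning to select among dispatching rules, can be sketched as follows. This is a minimal illustration only: the rule names, the reward signal, and the single-state setup are assumptions made for the sketch, not the thesis's actual design.

```python
import random

# Hypothetical dispatching rules; the abstract does not name the
# actual rule set, so these are placeholders for illustration.
ACTIONS = ["nearest_job_first", "longest_waiting_job", "random_job"]

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.2  # exploration probability

def select_action(q_table, state):
    """Epsilon-greedy choice of a dispatching rule for the given state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state):
    """Standard tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy episode: the agent repeatedly picks a rule; the assumed reward
# favors "nearest_job_first", so its Q-value should come out on top.
random.seed(0)
q = {}
for _ in range(500):
    a = select_action(q, "idle")
    reward = 1.0 if a == "nearest_job_first" else 0.0
    q_update(q, "idle", a, reward, "idle")

best_rule = max(ACTIONS, key=lambda a: q.get(("idle", a), 0.0))
print(best_rule)  # → nearest_job_first
```

If the environment changes so that a different rule starts earning the reward, the same update drives the Q-values toward the new rule, which mirrors the adaptive behavior the abstract reports.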