Dense Distributed Representations
A form of distributed representation.
Features (illustrated in the toy sketch below):
- Each neuron (unit) participates in representing many different concepts.
- Only a little information can be obtained from any single unit on its own.
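To make these properties concrete, here is a small toy sketch (my own illustration, not from the slides): a dense code assigns every concept a pattern over all units, so knowing that a single unit is active still leaves many candidate concepts.

```python
import numpy as np

# Toy dense distributed code: each concept is a pattern over ALL 8 units,
# and each unit ends up active for roughly half of the concepts.
rng = np.random.default_rng(0)
concepts = ["cat", "dog", "car", "cup", "sun", "sea"]
codes = rng.integers(0, 2, size=(len(concepts), 8))

for name, code in zip(concepts, codes):
    print(f"{name}: {code}")

# Knowing that one unit is active narrows things down only slightly;
# only the whole pattern across units identifies a concept.
unit = 3
candidates = [c for c, code in zip(concepts, codes) if code[unit] == 1]
print(f"unit {unit} active -> still ambiguous among {candidates}")
```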
[Figure: hidden units active for the similar words "blur" and "blue", including the units shared by both]
Seidenberg and McClelland (1989): a network with 200 hidden units reached 97.3% correct after training.
Example words: pint, mint, and said. For the word pint, 22 hidden units were active (Seidenberg and McClelland, 1989).
Similar words (pint / mint): 11 shared units, i.e. 50% overlap (11 of pint's 22 active units) → near-identical activation patterns, high interference.
Dissimilar words (pint / said): 4 shared units, i.e. 18% overlap (4 of 22) → different activation patterns, low interference.
Not much can be concluded from a single neuron being active. With 2,897 words in the input set, of which roughly 2,000 are dissimilar to a given word, an 18% overlap means about 2000 × 0.18 = 360 dissimilar words also activate any given unit.
Even if the overlap between dissimilar words were very small, say 1%, the same input set of 2,897 words with 2,000 dissimilar words still gives 2000 × 0.01 = 20 words sharing each active unit, so a single unit still cannot identify the word; see the sketch below.
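The arithmetic behind both slides, written out as a minimal sketch (the function name is mine; the numbers come from the slides):

```python
def words_sharing_a_unit(n_dissimilar_words: int, overlap: float) -> float:
    """Expected number of dissimilar words that also activate a given unit."""
    return n_dissimilar_words * overlap

# At 18% overlap between dissimilar words, ~360 other words activate
# the same unit, so the unit by itself identifies nothing:
print(words_sharing_a_unit(2000, 0.18))  # 360.0

# Even at an implausibly small 1% overlap, ~20 words still share the unit:
print(words_sharing_a_unit(2000, 0.01))  # 20.0
```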
Berkeley, Dawson, Medler, Schopflocher, and Hornsby (1995): using jittered density plots (a kind of scatter plot) to test whether a network has learned dense distributed representations.
Bowers, Damian, and Davis (2008) recently carried out a set of these analyses on a PDP model of short-term memory (STM) and word naming. The simulations were based on Botvinick and Plaut’s (2006) model of STM that includes a localist input and output unit for each letter and 200 hidden units. After training, the model could reproduce a series of 6 random letters at ~50% accuracy, which roughly matches human performance in an immediate serial recall task. In one analysis, we successfully trained their model to this criterion and then computed jittered density plots for all the hidden units in response to all 26 letters (the scatter plots were constructed in response to single letters rather than lists of letters).
The plots of the first 24 (out of 200) hidden units are presented in Figure 5A. As is clear from these plots (and equally true of the remaining plots), it is not possible to interpret the output of any given hidden unit, as each unit responds to many different letters. For the model to correctly retrieve a single letter (let alone a list of 6 letters in STM), the model must rely on a pattern of activation across a set of units. That is, the model has learned to support STM on the basis of dense distributed representations. In another analysis, I trained the same network to name a set of 275 monosyllable words presented one at a time. That is, rather than supporting STM, the model learned to name single words. Each input and output was a pattern of activation over three letter units, and the model was trained to reproduce the input pattern at the output layer. After training, it succeeded (~100%) on both trained words and 275 unfamiliar words. Figure 5B presents the jittered density plots of the first 24 hidden units in response to the 275 familiar words. Once again, the model succeeded on the basis of dense distributed representations. These analyses support Seidenberg and McClelland’s (1989) earlier conclusion.
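For readers who want to reproduce this kind of figure, here is a minimal sketch of a jittered density plot in the style of Berkeley et al. (1995), assuming a matrix of hidden-unit activations is already available; the data below are random placeholders, not the actual Botvinick and Plaut model.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_inputs, n_units = 26, 24                      # e.g., 26 letters, 24 hidden units
activations = rng.random((n_inputs, n_units))   # placeholder for real activations

fig, axes = plt.subplots(4, 6, figsize=(12, 8), sharex=True)
for unit, ax in enumerate(axes.flat):
    x = activations[:, unit]                    # unit's activation per input
    y = rng.uniform(0, 1, size=n_inputs)        # random vertical jitter only
    ax.scatter(x, y, s=10)
    ax.set_title(f"unit {unit}", fontsize=8)
    ax.set_yticks([])
fig.tight_layout()
plt.show()
# Distinct vertical bands would mark an interpretable (localist-like) unit;
# a smear across the axis is the signature of a dense distributed code.
```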
Sparse Distributed Coding
Encoding a complex concept with a small number of neurons.
Each neuron participates in only a limited number of concepts.
Features:
- Fast learning
- Weak generalization
Differences between sparse and dense coding (made concrete in the sketch below):
- Number of active neurons: sparse — fewer; dense — more
- Generalization power: sparse — weaker; dense — stronger
- Learning speed: sparse — faster; dense — slower
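A toy comparison (my own sketch, not from the slides) that makes the overlap difference concrete: random dense codes share many active units across concepts, while random sparse codes barely overlap at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_concepts = 1000, 200

def random_codes(active_fraction: float) -> np.ndarray:
    """Binary codes where each concept activates a random subset of units."""
    return (rng.random((n_concepts, n_units)) < active_fraction).astype(int)

for name, frac in [("dense", 0.5), ("sparse", 0.02)]:
    codes = random_codes(frac)
    shared = codes @ codes.T                     # pairwise counts of shared units
    off_diag = shared.sum() - np.trace(shared)   # exclude self-comparisons
    mean_shared = off_diag / (n_concepts * (n_concepts - 1))
    print(f"{name}: ~{mean_shared:.1f} shared active units per concept pair")
# Dense codes overlap heavily (graded similarity, hence better generalization
# but more interference between concepts); sparse codes barely overlap.
```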
Differences between sparse and coarse coding (see the toy sketch below):
- Number of neurons: coarse — fewer; sparse — more
- Generalization power: coarse — stronger; sparse — weaker
- Hierarchical structure: coarse — efficient; sparse — inefficient
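To illustrate the "number of neurons" row, here is a toy sketch (an assumption of mine, not from the slides) of coarse coding: a handful of broadly tuned units cover a stimulus dimension, and the combination of active units distinguishes finer positions than any single unit could.

```python
import numpy as np

centers = np.linspace(0, 10, 5)   # five broadly tuned "coarse" units
width = 2.0                       # wide, overlapping receptive fields

def active_units(stimulus: float) -> set:
    """Indices of coarse units whose receptive field covers the stimulus."""
    return {i for i, c in enumerate(centers) if abs(stimulus - c) < width}

# Different combinations of the same five units pick out different regions,
# so a few broadly tuned neurons can resolve many stimulus values:
for s in [1.0, 3.0, 4.0, 6.0, 9.0]:
    print(f"stimulus {s}: active units {sorted(active_units(s))}")
```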