Глубинное обучение (группа)

2018 July 06

Yuri Baburov in Глубинное обучение (группа)
Well, in a few words: I rather don't understand how an LSTM could help you at all.

Yuri Baburov in Глубинное обучение (группа)
Let's talk in terms of a second FFT transform: at what frequencies does your learnable signal mostly lie?

k k in Глубинное обучение (группа)
Yuri Baburov
Also, regarding learning stateful behavior -- you can try to subsample the data: instead of providing each of the 40 timesteps, give only 10 or 5 or 1. Or, instead of measuring every second, do 5x sampling for the data and average 5 labels into 1 (*corrected). This might prevent the kind of overfitting/generalization failure that comes from learning local behaviour instead of global behaviour.
Also, it's important to remember that there are cases and tasks where an LSTM layer doesn't help much, because there's not much reliable correlation in the global behaviour.
Here, as I understand it, you want to say the data frequency is too high and I should downsample it, and besides that, instead of using only one time frame it might be better to merge several neighbouring time frames and then feed the network. Am I right?
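
A minimal sketch of the subsampling/averaging idea, assuming a 1 kHz NumPy signal `x` and an aligned per-sample label array `y` (all names here are illustrative, not from the thread):

```python
import numpy as np

def downsample(x, y, factor=5):
    """Average every `factor` consecutive samples, and their labels, into one.

    x: (n_steps, n_channels) raw signal; y: (n_steps,) per-step labels.
    Trailing samples that don't fill a whole block are dropped.
    """
    n = (len(x) // factor) * factor
    x_ds = x[:n].reshape(-1, factor, x.shape[1]).mean(axis=1)
    y_ds = y[:n].reshape(-1, factor).mean(axis=1)  # e.g. average 5 labels into 1
    return x_ds, y_ds

# Example: one channel sampled at 1 kHz, downsampled 5x to 200 Hz.
x = np.random.randn(10_000, 1)
y = np.random.randint(0, 2, size=10_000).astype(float)
x_ds, y_ds = downsample(x, y, factor=5)   # shapes: (2000, 1) and (2000,)
```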

Yuri Baburov in Глубинное обучение (группа)
Again, by comparison with sounds: frequencies below 1 Hz (i.e. periods longer than one second) are of no interest.

k k in Глубинное обучение (группа)
Yuri Baburov
Again, by comparison with sounds: frequencies below 1 Hz (i.e. periods longer than one second) are of no interest.
Uh-huh, so I understood it wrong; my sampling rate is 1 kHz.

Yuri Baburov in Глубинное обучение (группа)
k k
Here, as I understand it, you want to say the data frequency is too high and I should downsample it, and besides that, instead of using only one time frame it might be better to merge several neighbouring time frames and then feed the network. Am I right?
These are two alternatives to consider:
learning each second of data independently (as if there were no generalizable signal in frequencies below 1 Hz),
and, as the second alternative, trying out how useful those frequencies are by coarse-graining your time data.
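
For illustration, a sketch of the two framings under the assumption of a 1 kHz signal: treat each second as an independent example, or coarse-grain each second into a few summary values and keep the longer sequence (the variable names are hypothetical):

```python
import numpy as np

fs = 1000                        # assumed sampling rate, Hz
x = np.random.randn(60 * fs)     # one minute of signal

# Alternative 1: each second becomes an independent training example,
# i.e. any structure at frequencies below 1 Hz is ignored.
per_second = x.reshape(-1, fs)                       # (60, 1000)

# Alternative 2: coarse-grain each second into a few summary values and
# keep them as a sequence, so a model can still use the slow frequencies.
coarse = np.stack([per_second.mean(axis=1),
                   per_second.std(axis=1)], axis=1)  # (60, 2)
```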

Yuri Baburov in Глубинное обучение (группа)
k k
Uh-huh, so I understood it wrong; my sampling rate is 1 kHz.
That's before the first FFT. And I believe you said that you then have frames of 40 ms and a label once per second or so.

Yuri Baburov in Глубинное обучение (группа)
Aha, no: you have a label for every 40 ms and data for every 1 ms.

k k in Глубинное обучение (группа)
Yuri Baburov
Well, in a few words: I rather don't understand how an LSTM could help you at all.
Would you suggest your preferred model instead of an LSTM? I'm not rigid about it and can try other models as well; I'm using an RNN-based one because it matches sequence-based data better.

Yuri Baburov in Глубинное обучение (группа)
A CNN, I think. And maybe an FFT before the CNN.

k k in Глубинное обучение (группа)
Yuri Baburov
Aha, no: you have a label for every 40 ms and data for every 1 ms.
Yes, it's like (40 samples x, 2 labels y).
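
If that framing is right (1 kHz raw data, two labels per 40 ms frame), slicing the stream into (40-sample, 2-label) pairs might look roughly like this; the array names are made up:

```python
import numpy as np

frame_len = 40                                  # 40 samples = 40 ms at 1 kHz
x = np.random.randn(100_000)                    # raw 1 kHz signal
y = np.random.randn(100_000 // frame_len, 2)    # two labels per 40 ms frame

n_frames = len(x) // frame_len
frames = x[: n_frames * frame_len].reshape(n_frames, frame_len)  # (2500, 40)
assert frames.shape[0] == y.shape[0]
```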

k k in Глубинное обучение (группа)
Actually, I have used some calculated features on this 40 ms time frame, like the FFT, and used them instead of the raw data. I have even applied Conv1D to them, and eventually I saw that when I mix it with an LSTM it gives me a slightly better result.

Yuri Baburov in Глубинное обучение (группа)
Also consider different kinds of preprocessing and find out which is best. First of all, you can try a fixed model like CNN + FC layers and compare different preprocessing in front of it.
For sounds, scientists found the best preprocessing a long time ago, and training an NN to reproduce it as part of the model isn't rational, because that makes learning much slower and much more data is needed to learn it.
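
One way to read the "fixed model, vary the preprocessing" suggestion, sketched with Keras (purely illustrative: the input shape, layer sizes and loss are assumptions, not something stated in the thread):

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_fixed_model(n_steps, n_features, n_labels=2):
    """A small fixed Conv1D + FC model; only the preprocessing in front changes."""
    model = tf.keras.Sequential([
        layers.Input(shape=(n_steps, n_features)),
        layers.Conv1D(32, kernel_size=3, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_labels),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Keep this architecture fixed and compare preprocessing variants in front of it:
# raw frames, FFT magnitudes per frame, different window functions, etc.
```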

k k in Глубинное обучение (группа)
Yuri Baburov
Also consider different kinds of preprocessing and find out which is best. First of all, you can try a fixed model like CNN + FC layers and compare different preprocessing in front of it.
For sounds, scientists found the best preprocessing a long time ago, and training an NN to reproduce it as part of the model isn't rational, because that makes learning much slower and much more data is needed to learn it.
That sounds nice: checking local pattern matching at first and then maybe applying a temporal model for temporal patterns 👍👍

Yuri Baburov in Глубинное обучение (группа)
yeah, indeed

Yuri Baburov in Глубинное обучение (группа)
so a good research plan would look like finding a good combination of parameters for the following:
0) choosing a baseline performance for your study (and optionally performing a baseline analysis)
1) preprocessing (no FFT; or FFT with a hann/hamming window, frame size, window size, overlap); see the sketch after this list
2) architecture: finding the best architecture.
3) learning the possible reasons for overfitting and measuring their impact on the final quality.
4) the "theoretical maximum" quality, probably a kind of analysis of the data variance across people, data noise (maybe by trying to smooth the data) and label noise (how often similar data leads to different labels).
You can take a small part of the dataset for most of these studies, so the network would train very fast (in several minutes on modern GPUs).
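
For item 1, a sketch of how those preprocessing knobs (window function, frame size, overlap) map onto an STFT call in SciPy; the concrete numbers are placeholders:

```python
import numpy as np
from scipy import signal

fs = 1000                           # assumed sampling rate, Hz
x = np.random.randn(10 * fs)        # 10 s of signal

# Window function, frame size and overlap are the parameters to search over.
f, t, Zxx = signal.stft(x, fs=fs,
                        window="hann",   # or "hamming"
                        nperseg=40,      # 40 samples = 40 ms frame
                        noverlap=20)     # 50% overlap
spectrogram = np.abs(Zxx)                # magnitude features to feed the network
```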

Yuri Baburov in Глубинное обучение (группа)
I'd also suggest taking initial values for all the parts from other people's work.

k k in Глубинное обучение (группа)
Thanks a lot, Yuri. I'm now thinking about how I can combine raw data and calculated features at the same time; I mean some CNN with several filters for the raw data together with some domain-related features like heart rate. Is that possible, or do you recommend it?

Yuri Baburov in Глубинное обучение (группа)
Yes, absolutely. You can approximate heartbeats with a linear, a cosine or an exponentially decaying function, I think.
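
A sketch of one way to combine a Conv1D branch over the raw frames with a few handcrafted, domain-related features (heart rate is used only as an illustrative auxiliary input; all sizes are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_two_branch_model(n_steps=40, n_aux=1, n_labels=2):
    """Conv1D branch over raw samples, concatenated with auxiliary features."""
    raw_in = layers.Input(shape=(n_steps, 1), name="raw_frame")
    aux_in = layers.Input(shape=(n_aux,), name="aux_features")   # e.g. heart rate

    h = layers.Conv1D(16, kernel_size=5, activation="relu")(raw_in)
    h = layers.GlobalAveragePooling1D()(h)

    merged = layers.Concatenate()([h, aux_in])
    merged = layers.Dense(32, activation="relu")(merged)
    out = layers.Dense(n_labels)(merged)
    return tf.keras.Model(inputs=[raw_in, aux_in], outputs=out)
```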
2018 July 07

Evgeniy Zheltonozhskiy🇮🇱 in Глубинное обучение (группа)
Yuri Baburov
Yes, absolutely. You can approximate heartbeats with a linear, a cosine or an exponentially decaying function, I think.
If your heartbeat is exponentially decaying, you've got some problems.