Wednesday, December 14, 2016

Matlab Moving Average

I need to calculate a moving average over a data series, inside a for loop. I have to get the moving average over N = 9 days. The matrix I am computing on is 4 series of 365 values (M), which are mean values of another data set. I want to plot the mean values of my data together with the moving average in one plot.


I searched a bit about moving averages and the "conv" command and found something I tried to implement in my code:


So basically I compute my mean and plot it with a (wrong) moving average. I picked the value of "wts" right off the mathworks site, so that is incorrect. (Source: http://www.mathworks.nl/help/econ/moving-average-trend-estimation.html) My problem, though, is that I don't understand what this "wts" is. Could someone explain? If it has something to do with the weights of the values: that is not valid in this case. All values are weighted equally.


And if I am doing this completely wrong, could I get some help with it?


My sincerest thanks.


Asked Sep 23 '14 at 19:05


Using conv is an excellent way to implement a moving average. In the code you are using, wts is how much you are weighting each value (as you guessed). The sum of that vector should always be equal to one. If you want to weight each value evenly and do an N-point moving filter, you would do
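A minimal sketch of what that might look like, assuming the series is stored in a vector M and the window length N = 9, as in the question:

% Uniform weights that sum to one
N = 9;
wts = ones(1, N) / N;
% Moving average of one 365-value series stored in the vector M
Ms = conv(M, wts, 'valid');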


Using the 'valid' argument in conv will result in having fewer values in Ms than in M. Use 'same' if you don't mind the zero-padding effects. If you have the Signal Processing Toolbox, you can use cconv if you want to try a circular moving average. Something like
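For instance, something along these lines (the circular period is set to the signal length, which is my assumption, not stated in the answer):

Ms_same = conv(M, wts, 'same');       % zero-padded at the edges, same length as M
Ms_circ = cconv(M, wts, length(M));   % circular moving average, wraps around the ends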


You should read the conv and cconv documentation for more information if you haven't already.


Answered Sep 23 '14 at 19:36


Thanks for the help, this worked as well! – Dennis Alders Sep 23 '14 at 19:52


I would use this:


Ripped from here.


To comment on your current implementation: wts is the weighting vector which, following the Mathworks example, is a 13-point average, with the first and last points weighted half as much as the rest.


You can use filter to find a running average without using a for loop. This example finds the running average of a 16-element vector, using a window size of 5.
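A sketch of that example (the 16-element data vector here is made up purely for illustration):

data = 1:16;                     % any 16-element vector
windowSize = 5;
b = ones(1, windowSize) / windowSize;
avg = filter(b, 1, data);        % running average; the first windowSize-1 values are only partial averages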


2) smooth, as part of the Curve Fitting Toolbox (which is available in most cases)


yy = smooth(y) smooths the data in the column vector y using a moving-average filter. The results are returned in the column vector yy. The default span for the moving average is 5.
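Assuming the Curve Fitting Toolbox is installed, usage is simply:

yy = smooth(y);       % default 5-point moving average
yy9 = smooth(y, 9);   % explicit 9-point span, matching the N = 9 in the question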


Answered Sep 24 '14 at 9:43




September 29, 2013


Moving average by convolution


What is the moving average and what is it good for?


How is the moving average done by convolution?


The moving average is a simple operation usually used to suppress noise in a signal: we set the value of each point to the average of the values in its neighborhood. As a formula:
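The formula itself did not survive the page conversion; from the description that follows (input x, output y, odd window size w, symmetric window), it is presumably of the form:

$$ y_i = \frac{1}{w} \sum_{j=-(w-1)/2}^{(w-1)/2} x_{i+j} $$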


Here x is the input and y is the output signal, while the window size is w, which is assumed to be odd. The formula above describes a symmetric operation: samples are taken from both sides of the current point.


Below is a real-life example. The point the window is placed on is colored red. Values outside of x are assumed to be zeros:


To play around and see the effects of the moving average, have a look at this interactive demo.


How to do it by convolution?


As you may have recognized, calculating the simple moving average is similar to convolution: in both cases a window is slid along the signal and the elements in the window are summed up. So let's give it a try and do the same thing by convolution. Use the following parameters:


The desired output is:


As a first attempt, let's try what we get by convolving the signal x with the following kernel k:


The output is exactly three times larger than expected. It can also be seen that the output values are the sum of the three elements in the window. That's because during the convolution the window is slid along the signal, all the elements in it are multiplied by one, and then they are summed up:


$$ y_k = 1 \cdot x_{k-1} + 1 \cdot x_k + 1 \cdot x_{k+1} $$


To get the desired values of y, the output has to be divided by 3:


As a formula, including the division:


But wouldn't it be better to do the division during the convolution? Here comes the idea of rearranging the equation:


So we will use the following kernel k:


This way we will get the desired output:


In general: if we want to do the moving average by convolution with a window size of w, we will use the following kernel k:


A simple function that does the moving average is:
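The original listing is missing here, so the following is a minimal sketch consistent with the approach described above (the name moving_average is mine, not the author's):

function y = moving_average(x, w)
% Moving average of x by convolution with a uniform kernel of width w (w odd).
k = ones(1, w) / w;        % kernel: each element is 1/w
y = conv(x, k, 'same');    % 'same' keeps the output the same length as x
end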


An example of its use is:
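For instance, a hypothetical call on a short made-up signal:

x = [2 1 3 4 2 5 6 4 3];    % made-up input signal
y = moving_average(x, 3)    % 3-point moving average, displayed in the command window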




The following technical analysis systems work effectively on bullion. Dow theory and the combination of lagging indicators can be really useful in predicting price movement. We can use moving averages to predict the trend of precious metals. We can use the stochastic oscillator together with trend-following indicators to decide the timing of entry and exit in bullion.


Correlograms are also employed in the model-identification stage when fitting ARIMA models. In this case, a moving-average model is assumed for the data and the following confidence bands should be generated:


Apart from pattern recognition, technical analysts also study momentum and moving-average models. Momentum analysis studies the rate of change of prices rather than merely the price levels. If the rate of change is increasing, that indicates a trend will continue; if the rate of change is decreasing, that indicates the trend is likely to reverse. One of the most important rules for technical analysts is that a key change has occurred when a long-term moving average crosses a short-term moving average.


The moving average is probably the most widely used of all indicators. It comes in different types and has several applications. In basic terms, though, a moving average helps smooth out the fluctuations of a price (or an indicator) and provides a much more accurate reflection of the direction in which the security is moving. Moving averages are lagging indicators and fall into the trend-following category. The different types include simple, weighted, exponential, variable, and triangular.


Moving averages are called lagging indicators simply because, although they can give signals that a trend has begun or ended, they give this signal only after the trend has already started. That is why the moving average is called a trend-following indicator.


This approach is also known as the moving-average percentage approach. In this approach, the original data values in the time series are expressed as percentages of moving averages. The steps and tabulations are given below.


The notion behind moving averages is quite simple. When actual prices are rising, they will be above the average. That could indicate an opportunity. On the other hand, when underlying prices are below the average, that indicates falling prices and possibly a bear market.


As your stock rises in price, there is an important line you want to watch. This is the 50-day moving average. If your stock stays above it, that is a very good sign. If your stock drops below the line on heavy volume, watch out, there could be trouble ahead. A 50-day moving average line takes ten weeks of closing price data and then plots the average. The line is recalculated every day. This will show a stock's price trend. It can be up, down, or sideways. In general you should only buy stocks that are above their 50-day moving average. This tells you the stock is trending upward in price. You usually want to trade with the trend, not against it. Many of the world's biggest traders, past and present, only trade in the direction of the trend. When a winning stock corrects in price, which is normal, it may drop down to its 50-day moving average. Winning stocks will normally find support at that line again and again. Massive trading institutions, such as mutual funds, pension funds, and hedge funds, watch top stocks closely. When these large volume-trading entities spot an excellent stock moving down to its 50-day line, they see it as an opportunity to add to or start a position at a reasonable price.


The distinction between the different types of moving averages is simply the way in which the averages are calculated. For example, a simple moving average places equal weight on each value in the period; weighted and exponential averages place more emphasis on recent values in the period; a triangular moving average places greater emphasis on the middle section of the time period; and a variable moving average adjusts the weighting depending on the volatility in the period.


The above is not meant to be authoritative, but simply to show that the term "running average" is also frequently used to mean the moving average. I'm sure there are just as many examples where "running average" indicates a cumulative average. But for that case, I consider a much more appropriate term to be "cumulative moving average", so I've gone with that.


What makes the EMA supposedly superior to a simple moving average (SMA)? The thinking behind the EMA makes sense: SMA lines respond to changes in the trend only gradually. For active traders who rely on this fundamental tool, that means lagging triggers and missed trading opportunities. The exponential moving average formula responds significantly faster and helps active traders react to trend changes with greater agility.


Moving averages are useful in both short-term and long-term analysis. While shorter-term averages are used to measure or smooth short-term trends, longer averages are used to measure or smooth long-term trends.


The formula above specifies that the closing price must be above a 15-period simple moving average (denoted by 'C


Moving Average Filter


This example shows how to smooth the data in count.dat using a moving-average filter to see the average traffic flow over a 4-hour window (covering the current hour and the previous 3 hours). This is represented by the following difference equation:
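The equation itself was lost in extraction; for a 4-hour window that weights the current hour and the previous three hours equally, it would read:

$$ y(n) = \tfrac{1}{4}x(n) + \tfrac{1}{4}x(n-1) + \tfrac{1}{4}x(n-2) + \tfrac{1}{4}x(n-3) $$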


Create the corresponding coefficient vectors.


Import the data from count.dat using the load function.


Loading this data creates a 24-by-3 matrix called count in the MATLAB® workspace.


Extract the first column of count and assign it to the vector x.


Compute the 4-hour moving average of the data.


Plot the original data and the filtered data.
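The individual code blocks for the steps above were stripped from the page; a sketch reconstructing them from the text (count.dat ships with MATLAB, so this should run as written):

% Create the coefficient vectors: a 4-point uniform window
a = 1;
b = [1/4 1/4 1/4 1/4];

% Import the data from count.dat
load count.dat             % creates the 24-by-3 matrix "count"

% Extract the first column of count and assign it to the vector x
x = count(:, 1);

% Compute the 4-hour moving average of the data
y = filter(b, a, x);

% Plot the original data and the filtered data
t = 1:length(x);
plot(t, x, ':', t, y, '-')
legend('Original data', '4-hour moving average')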


The filtered data, represented by the solid line in the plot, is the 4-hour moving average of the count data. The original data is represented by the dotted line.


MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/trademarks for a list of other trademarks owned by The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective owners.




Do adaptive moving averages lead to better results?


Moving averages are a favorite tool of active traders. However, when markets consolidate, this indicator leads to numerous whipsaw trades, resulting in a frustrating series of small wins and losses. Analysts have spent decades trying to improve on the simple moving average. In this article, we look at these efforts and find that their search has led to useful trading tools. Pros and Cons of Moving Averages The advantages and disadvantages of moving averages were summarized by Robert Edwards and John Magee in the first edition of Technical Analysis of Stock Trends, when they said "and it was back in 1941 that we delightedly made the discovery (though many others had made it before) that by averaging the data for a stated number of days ... one could derive a sort of automated trendline which would definitely interpret the changes of trend ... It seemed almost too good to be true. As a matter of fact, it was too good to be true."


With the disadvantages outweighing the advantages, Edwards and Magee quickly abandoned their dream of trading from a beach bungalow. But 60 years after they wrote those words, others persist in trying to find a simple tool that will effortlessly deliver the riches of the markets.


Simple Moving Averages To calculate a simple moving average, add the prices for the desired time period and divide by the number of periods selected. Finding a five-day moving average would require summing the five most recent closing prices and dividing by five.


If the most recent close is above the moving average, the stock would be considered to be in an uptrend.


Downtrends are defined by prices trading below the moving average. (For more, see our Moving Averages tutorial.)


This trend-defining property makes it possible for moving averages to generate trading signals. In its simplest application, traders buy when prices move above the moving average and sell when prices cross below that line. An approach like this is guaranteed to put the trader on the right side of every significant trade. Unfortunately, while smoothing the data, moving averages lag behind the market action, and the trader will almost always give back a large portion of the gains, even on the biggest winning trades.


Exponential Moving Averages Analysts seem to like the idea of the moving average and have spent years trying to reduce the problems associated with this lag. One of these innovations is the exponential moving average (EMA). This approach assigns a relatively higher weighting to recent data, and as a result it stays closer to the price action than a simple moving average. The formula to calculate an exponential moving average is:


EMA = (Weight * Close) + ((1 − Weight) * EMAy), where:


Weight is the smoothing constant selected by the analyst


EMAy is yesterday's exponential moving average


A common weighting value is 0.181, which is close to a 20-day simple moving average. Another is 0.10, which is approximately a 10-day moving average.
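As a rough MATLAB illustration of that recursion (the variable names are mine, not the article's):

weight = 0.181;                      % roughly a 20-day EMA
ema = zeros(size(closes));           % "closes" is assumed to be a vector of closing prices
ema(1) = closes(1);                  % seed the recursion with the first close
for t = 2:numel(closes)
    ema(t) = weight*closes(t) + (1 - weight)*ema(t-1);
end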


Even though it reduces the lag, the exponential moving average fails to address another problem with moving averages, which is that using them for trading signals will lead to a large number of losing trades. In New Concepts in Technical Trading Systems, Welles Wilder estimates that markets only trend a quarter of the time. Up to 75% of trading action is confined to narrow ranges, where moving-average buy and sell signals will be generated repeatedly as prices move rapidly above and below the moving average. To address this problem, several analysts have suggested varying the weighting factor of the EMA calculation. (For more, see: How are moving averages used in trading?)


Adapting Moving Averages to Market Action One method of addressing the disadvantages of moving averages is to multiply the weighting factor by a volatility ratio. Doing this would mean that the moving average would be further from the current price in volatile markets. This would allow winners to run. As a trend comes to an end and prices consolidate, the moving average would move closer to the current market action and, in theory, allow the trader to keep most of the gains captured during the trend. In practice, the volatility ratio can be an indicator such as the Bollinger Band® width, which measures the distance between the well-known Bollinger Bands®. (For more on this indicator, see The Basics Of Bollinger Bands®.)


Perry Kaufman suggested replacing the "weight" variable in the EMA formula with a constant based on the efficiency ratio (ER) in his book, New Trading Systems and Methods. This indicator is designed to measure the strength of a trend, defined within a range from -1.0 to +1.0. It is calculated with a simple formula:


ER = (total price change for period) / (sum of absolute price changes for each bar)


Consider a stock that has a five-point range each day, and at the end of five days has gained a total of 15 points. This would result in an ER of 0.67 (15 points upward movement divided by the total 25-point range). Had this stock declined 15 points, the ER would be -0.67. (For more trading advice from Perry Kaufman, read Losing To Win, which outlines strategies for coping with trading losses.)
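In MATLAB, that ratio could be computed over a trailing window of n bars roughly as follows (the names are mine; closes is again an assumed vector of closing prices):

n = 5;                                                     % lookback of five bars
recent = closes(end-n:end);                                % last n+1 closes give n bar-to-bar changes
ER = (recent(end) - recent(1)) / sum(abs(diff(recent)));   % efficiency ratio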


The principle of a trend's efficiency is based on how much directional movement (or trend) you get per unit of price movement over a defined time period. An ER of +1.0 indicates that the stock is in a perfect uptrend; -1.0 represents a perfect downtrend. In practical terms, the extremes are rarely reached.


To apply this indicator to find the adaptive moving average (AMA), traders will need to calculate the weight with the following, rather complex, formula:


C = [(ER * (SCF − SCS)) + SCS]², where:


SCF is the exponential constant for the fastest EMA allowable (usually 2)


SCS is the exponential constant for the slowest EMA allowable (often 30)


ER is the efficiency ratio that was noted above


The value for C is then used in the EMA formula instead of the simpler weight variable. Although difficult to calculate by hand, the adaptive moving average is included as an option in almost all trading software packages. (For more on the EMA, read Exploring The Exponentially Weighted Moving Average.)
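A hedged sketch of how the adaptive moving average might then be computed, assuming Kaufman's usual convention of converting the fastest (2-period) and slowest (30-period) EMAs to smoothing constants of the form 2/(period + 1), and using the absolute value of ER for the weighting:

scf = 2/(2 + 1);                     % fastest smoothing constant
scs = 2/(30 + 1);                    % slowest smoothing constant
n   = 10;                            % ER lookback, an arbitrary choice
ama = zeros(size(closes));           % closes: assumed vector of closing prices
ama(n+1) = closes(n+1);              % seed the recursion
for t = n+2:numel(closes)
    win = closes(t-n:t);
    er  = abs(win(end) - win(1)) / sum(abs(diff(win)));   % efficiency ratio, 0 to 1
    c   = (er*(scf - scs) + scs)^2;                       % adaptive constant from the formula above
    ama(t) = ama(t-1) + c*(closes(t) - ama(t-1));
end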


Examples of a simple moving average (red line), an exponential moving average (blue line) and the adaptive moving average (green line) are shown in Figure 1.


Figure 1: The AMA is in green and shows the greatest degree of flattening in the range-bound action seen on the right side of this chart. In most cases, the exponential moving average, shown as the blue line, is closest to the price action. The simple moving average is shown as the red line.


The three moving averages shown in the figure are all prone to whipsaw trades at various times. This drawback to moving averages has thus far been impossible to eliminate.


Conclusion Robert Colby tested hundreds of technical-analysis tools in The Encyclopedia of Technical Market Indicators. He concluded, "Although the adaptive moving average is an interesting newer idea with considerable intellectual appeal, our preliminary tests fail to show any real practical advantage to this more complex trend smoothing method." This doesn't mean traders should ignore the idea. The AMA could be combined with other indicators to develop a profitable trading system. (For more on this topic, read Discovering Keltner Channels And The Chaikin Oscillator.)


The ER can be used as a stand-alone trend indicator to spot the most profitable trading opportunities. As one example, ratios above 0.30 indicate strong uptrends and represent potential buys. Alternatively, since volatility moves in cycles, the stocks with the lowest efficiency ratio might be watched as breakout opportunities.




Exercises in MATLAB for convolution and Fourier transform


Our second filter weights the signal proportional to 1/s:


Our final filter is a digital version of what you will be making out of small electronic parts. It has the characteristic that the amplitude is proportional to 1/(1+tau/s):


The filters go from 0 to max positive, then from max negative to zero (because of the output order in the fft). It is just as well to look at only the first half of the filter. Having done so, we can look at these filters with a logarithmic x axis, which is a little easier to see. The third command just limits the range on the Y axis:


Compare the results. Can you hear the difference?:


Filtering by moving average


Here are some other exercises to think about in order to cement this knowledge: 1. Apply a moving average filter to the Fourier transform of pure noise, and observe what happens to its inverse transform (to time domain). What do you expect?
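A possible sketch of that first exercise (the window length is an arbitrary choice):

x = randn(1, 1024);                      % pure noise
X = fft(x);
w = 16;                                  % moving-average window applied in the frequency domain
Xs = conv(X, ones(1, w)/w, 'same');      % smooth the spectrum
xs = real(ifft(Xs));                     % back to the time domain
plot(1:1024, x, 1:1024, xs)
legend('Original noise', 'After smoothing the spectrum')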


Published with MATLAB® 7.7


Filtering noise out of sensor data is an important first step while working with any real-time system. Here we use MATLAB to filter noise out of 3-axis accelerometer data in real-time. Both Exponential Moving Average (EMA, low pass, Infinite Impulse Response - IIR) and Simple Moving Average (SMA, Finite Impulse Response - FIR) filters are shown. The accelerometer is connected to Matlab using the Arduino UNO board.
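The gist of the two filters, applied to a generic 3-axis matrix rather than the live Arduino stream (a sketch, not the code from the video; acc is assumed to be an N-by-3 matrix of raw samples):

alpha = 0.2;                                   % EMA smoothing constant
emaAcc = filter(alpha, [1, alpha - 1], acc);   % IIR: y(n) = alpha*x(n) + (1-alpha)*y(n-1)
M = 10;                                        % SMA window length
smaAcc = filter(ones(M,1)/M, 1, acc);          % FIR: 10-point simple moving average, column-wise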


Threshold Crossing


Often times, one needs to detect when a sensor signal crosses a certain threshold level. Here we use MATLAB to detect threshold crossing for 3-axis accelerometer data. Using the 3-case approach described in this video, one can detect threshold crossing for any sensor signal. One can use the raw or the filtered sensor signal. The accelerometer is connected to Matlab using the Arduino UNO board.
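One simple way to flag crossings (a sketch that does not reproduce the video's exact 3-case logic; acc is the assumed N-by-3 matrix from the previous sketch):

thresh = 1.5;                             % threshold level, arbitrary
above = acc(:,1) > thresh;                % logical vector for one axis
risingEdges  = find(diff(above) == 1);    % samples where the signal crosses upward
fallingEdges = find(diff(above) == -1);   % samples where it crosses back down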


© 2016 Pramod Abichandani


Documentation


M = mean( A ) returns the mean of the elements of A along the first array dimension whose size does not equal 1.


If A is a vector, then mean(A) returns the mean of the elements.


If A is a matrix, then mean(A) returns a row vector containing the mean of each column.


If A is a multidimensional array, then mean(A) operates along the first array dimension whose size does not equal 1, treating the elements as vectors. This dimension becomes 1 while the sizes of all other dimensions remain the same.


M = mean( A , dim ) returns the mean along dimension dim. For example, if A is a matrix, then mean(A,2) is a column vector containing the mean of each row.


M = mean( ___ , outtype ) returns the mean with a specified data type, using any of the input arguments in the previous syntaxes. outtype can be 'default', 'double', or 'native'.


M = mean( ___ , nanflag ) specifies whether to include or omit NaN values from the calculation for any of the previous syntaxes. mean(A,'includenan') includes all NaN values in the calculation while mean(A,'omitnan') ignores them.
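A quick illustration of those syntaxes:

A = [1 2 NaN; 4 5 6];
mean(A)               % column means: [2.5 3.5 NaN]
mean(A, 2)            % row means: [NaN; 5]
mean(A, 'omitnan')    % column means ignoring the NaN: [2.5 3.5 6]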




Download movAv.m (see also movAv2 - an updated version allowing weighting)


Description Matlab includes functions called movavg and tsmovavg ("time-series moving average") in the Financial Toolbox, movAv is designed to replicate the basic functionality of these. The code here provides a nice example of managing indexes inside loops, which can be confusing to begin with. I've deliberately kept the code short and simple to keep this process clear.


movAv performs a simple moving average that can be used to recover noisy data in some situations. It works by taking the mean of the input ( y ) over a sliding time window, the size of which is specified by n . The larger n is, the greater the amount of smoothing; the effect of n is relative to the length of the input vector y , and effectively (well, sort of) creates a lowpass frequency filter - see the examples and considerations section.


Because the amount of smoothing provided by each value of n is relative to the length of the input vector, it's always worth testing different values to see what's appropriate. Remember also that n points are lost on each average; if n is 100, the first 99 points of the input vector don't contain enough data for a 100pt average. This can be avoided somewhat by stacking averages, for example, the code and graph below compare a number of different length window averages. Notice how smooth 10+10pt is compared to a single 20pt average. In both cases 20 points of data are lost in total.


% Create xaxis
x = 1:0.01:5;
% Generate noise
noiseReps = 4;
noise = repmat(randn(1,ceil(numel(x)/noiseReps)), noiseReps, 1);
noise = reshape(noise, 1, length(noise)*noiseReps);
% Generate ydata + noise
y = exp(x) + 10*noise(1:length(x));
% Perform averages:
y2 = movAv(y, 10);   % 10 pt
y3 = movAv(y2, 10);  % 10+10 pt
y4 = movAv(y, 20);   % 20 pt
y5 = movAv(y, 40);   % 40 pt
y6 = movAv(y, 100);  % 100 pt
% Plot
figure
plot(x, [y', y2', y3', y4', y5', y6'])
legend('Raw data', '10pt moving average', '10+10pt', '20pt', '40pt', '100pt')
xlabel('x'); ylabel('y');
title('Comparison of moving averages')


movAv.m code run-through

function output = movAv(y, n)

The first line defines the function's name, inputs, and outputs. The input y should be a vector of data to perform the average on, and n should be the number of points to perform the average over; output will contain the averaged data returned by the function.

% Preallocate output
output = NaN(1, numel(y));
% Find mid point of n
midPoint = round(n/2);

The main work of the function is done in the for loop, but before starting, two things are prepared. The output is preallocated as NaNs, which serves two purposes. Firstly, preallocation is generally good practice as it reduces the memory juggling Matlab has to do; secondly, it makes it very easy to place the averaged data into an output the same size as the input vector. This means the same x-axis can be used later for both, which is convenient for plotting; alternatively, the NaNs can be removed later in one line of code ( output = output(


The variable midPoint will be used to align the data in the output vector. If n =10, 10 points will be lost because, for the first 9 points of the input vector, there isn't enough data to take a 10 point average. As the output will be shorter than the input, it needs to be aligned properly. midPoint will be used so an equal amount of data is lost at the start and end, and the input is kept aligned with the output by the NaN buffers created when preallocating output .


for a = 1:length(y)-n
    % Find index range to take average over (a:b)
    b = a + n - 1;
    % Calculate mean
    output(a + midPoint) = mean(y(a:b));
end

In the for loop itself, a mean is taken over each consecutive segment of the input. The loop runs over a , which goes from 1 up to the length of the input ( y ) minus the data that will be lost ( n ). If the input is 100 points long and n is 10, the loop will run from ( a =) 1 to 90.


This means a provides the first index of the segment to be averaged. The second index ( b ) is simply a+n-1 . So on the first iteration, a=1 and n=10 , so b = 1+10-1 = 10 . The first average is taken over y(a:b) , or y(1:10) . The average of this segment, which is a single value, is stored in output at index a+midPoint , or 1+5=6.


On the second iteration, a=2 and b = 2+10-1 = 11 , so the mean is taken over y(2:11) and stored in output(7) . On the last iteration of the loop for an input of length 100, a=90 and b = 90+10-1 = 99 , so the mean is taken over y(90:99) and stored in output(95) . This leaves output with a total of n (10) NaN values at index (1:5) and (96:100).
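Putting the pieces quoted above back together, the complete function is roughly:

function output = movAv(y, n)
% Simple n-point moving average of the vector y.
% Preallocate output
output = NaN(1, numel(y));
% Find mid point of n
midPoint = round(n/2);
for a = 1:length(y)-n
    % Index range to take the average over
    b = a + n - 1;
    % Calculate mean and store it aligned with the input
    output(a + midPoint) = mean(y(a:b));
end
end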


Examples and considerations Moving averages are useful in some situations, but they're not always the best choice. Here are two examples where they're not necessarily optimal.


Microphone calibration This set of data represents the levels of each frequency produced by a speaker and recorded by a microphone with a known linear response. The output of the speaker varies with frequency, but we can correct for this variation with the calibration data - the output can be adjusted in level to account for the fluctuations in the calibration.


Notice that the raw data is noisy - this means that a small change in frequency appears to require a large, erratic, change in level to account for. Is this realistic? Or is this a product of the recording environment? It's reasonable in this case to apply a moving average that smooths out the level/frequency curve to provide a calibration curve that is slightly less erratic. But why isn't this optimal in this example?


More data would be better - multiple calibrations runs averaged together would destroy the noise in the system (as long as it's random) and provide a curve with less subtle detail lost. The moving average can only approximate this, and may remove some higher frequency dips and peaks from the curve that truly do exist.


Sine waves Using a moving average on sine waves highlights two points:


The general issue of choosing a reasonable number of points to perform the average over.


It's simple, but there are more effective methods of signal analysis than averaging oscillating signals in the time domain.


In this graph, the original sine wave is plotted in blue. Noise is added and plotted as the orange curve. A moving average is performed at different numbers of points to see if the original wave can be recovered. 5 and 10 points provide reasonable results, but don't remove the noise entirely, whereas greater numbers of points start to lose amplitude detail as the average extends over different phases (remember the wave oscillates around zero, and mean([-1 1]) = 0).


An alternative approach would be to construct a lowpass filter that can be applied to the signal in the frequency domain. I'm not going to go into detail as it goes beyond the scope of this article, but as the noise is considerably higher frequency than the wave's fundamental frequency, it would be fairly easy in this case to construct a lowpass filter that will remove the high-frequency noise.


Documentation


Moving Average Model


MA( q ) Model


The moving average (MA) model captures serial autocorrelation in a time series y_t by expressing the conditional mean of y_t as a function of past innovations, ε_{t−1}, ε_{t−2}, …, ε_{t−q}. An MA model that depends on q past innovations is called an MA model of degree q, denoted by MA(q).


The form of the MA(q) model in Econometrics Toolbox™ is


$$ y_t = c + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}, $$


where ε_t is an uncorrelated innovation process with mean zero. For an MA process, the unconditional mean of y_t is μ = c.


In lag operator polynomial notation, L^i y_t = y_{t−i}. Define the degree q MA lag operator polynomial θ(L) = (1 + θ_1 L + … + θ_q L^q). You can write the MA(q) model as


$$ y_t = \mu + \theta(L)\,\varepsilon_t. $$


Invertibility of the MA Model


By Wold's decomposition [1], an MA(q) process is always stationary because θ(L) is a finite-degree polynomial.


For a given process, however, there is no unique MA polynomial: there is always both a noninvertible and an invertible solution [2]. For uniqueness, it is conventional to impose invertibility constraints on the MA polynomial. Practically speaking, choosing the invertible solution implies the process is causal. An invertible MA process can be expressed as an infinite-degree AR process, meaning only past events (not future events) predict current events. The MA operator polynomial θ(L) is invertible if all its roots lie outside the unit circle.


Econometrics Toolbox enforces invertibility of the MA polynomial. When you specify an MA model using arima, you get an error if you enter coefficients that do not correspond to an invertible polynomial. Similarly, estimate imposes invertibility constraints during estimation.
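For example, specifying an invertible MA(2) model in the toolbox might look like this (coefficient values chosen arbitrarily so that the roots of the MA polynomial lie outside the unit circle):

Mdl = arima('Constant', 0.5, 'MA', {0.6, 0.3}, 'Variance', 1);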


References


[1] Wold, H. A Study in the Analysis of Stationary Time Series. Uppsala, Sweden: Almqvist & Wiksell, 1938.




Much of my research focuses on the dynamic relationships between assets in the market (#1, #2, #3). Typically, I use correlation as a measure of relationship dependence since its results are easy to communicate and understand (as opposed to mutual information, which is somewhat less used in finance than it is in information theory). However, analyzing the dynamics of correlation requires us to calculate a moving correlation (a.k.a. windowed, trailing, or rolling).


Moving averages are well-understood and easily calculated – they take into account one asset at a time and produce one value for each time period. Moving correlations, unlike moving averages, must take into account multiple assets and produce a matrix of values for each time period. In the simplest case, we care about the correlation between two assets – for example, the S&P 500 (SPY) and the financial sector (XLF). In this case, we need only pay attention to one value in the matrix. However, if we were to add the energy sector (XLE), it becomes more difficult to efficiently calculate and represent these correlations. This is always true for 3 or more different assets.


I’ve written the code below to simplify this process (download). First, you provide a matrix ( dataMatrix ) with variables in the columns – for example, SPY in column 1, XLF in column 2, and XLE in column 3. Second, you provide a window size ( windowSize ). For example, if dataMatrix contained minutely returns, then a window size of 60 would produce trailing hourly correlation estimates. Third, you indicate which column ( indexColumn ) you care about seeing the results for. In our example, we would likely specify column 1, since this would allow us to observe the correlation between (1) the S&P and financial sector and (2) the S&P and energy sector.
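The download link may no longer resolve, so here is a hedged sketch of a function doing what the post describes (the name movingCorrelation and the output layout are my assumptions, not necessarily the author's):

function corrSeries = movingCorrelation(dataMatrix, windowSize, indexColumn)
% Trailing (rolling) correlation between indexColumn and every column of dataMatrix.
[nObs, nVars] = size(dataMatrix);
corrSeries = NaN(nObs, nVars);                 % row t holds correlations for the window ending at t
for t = windowSize:nObs
    window = dataMatrix(t-windowSize+1:t, :);  % trailing window of windowSize observations
    C = corrcoef(window);                      % full correlation matrix for this window
    corrSeries(t, :) = C(indexColumn, :);      % keep only the row for the column of interest
end
end

With minutely SPY, XLF, and XLE returns in columns 1 to 3, movingCorrelation(returns, 60, 1) would then give trailing hourly correlations of the S&P with each sector.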


The image below shows the results for exactly the example above for last Friday, October 1st, 2010.




2 Responses to “Calculating Moving Correlation in Matlab”


it’s not clear how you deal with NA.


How would you calculate correlations for indexes across different countries where one data point can be missing due to a particular holiday in a single country?


Hi Paolo, The code as I've posted doesn't deal with NaNs gracefully. You can see from this Matlab documentation page that you can add 'rows', 'complete' to the corrcoef command to gracefully deal with the issue. http://www.mathworks.com/help/techdoc/ref/corrcoef.html


The other alternatives are to drop that date completely, interpolate, or use a more sophisticated method for dealing with missing observations.




Documentation


Autoregressive Moving Average Model


ARMA( p , q ) Model


For some observed time series, a very high-order AR or MA model is needed to model the underlying process well. In this case, a combined autoregressive moving average (ARMA) model can sometimes be a more parsimonious choice.


An ARMA model expresses the conditional mean of y_t as a function of both past observations, y_{t−1}, …, y_{t−p}, and past innovations, ε_{t−1}, …, ε_{t−q}. The number of past observations that y_t depends on, p, is the AR degree. The number of past innovations that y_t depends on, q, is the MA degree. In general, these models are denoted by ARMA(p, q).


The form of the ARMA(p, q) model in Econometrics Toolbox™ is


$$ y_t = c + \phi_1 y_{t-1} + \dots + \phi_p y_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}, $$


where ε_t is an uncorrelated innovation process with mean zero.


In lag operator polynomial notation, L^i y_t = y_{t−i}. Define the degree p AR lag operator polynomial ϕ(L) = (1 − ϕ_1 L − … − ϕ_p L^p). Define the degree q MA lag operator polynomial θ(L) = (1 + θ_1 L + … + θ_q L^q). You can write the ARMA(p, q) model as


$$ \phi(L)\, y_t = c + \theta(L)\,\varepsilon_t. $$


The signs of the coefficients in the AR lag operator polynomial, ϕ(L), are opposite to the right side of Equation 5-10. When specifying and interpreting AR coefficients in Econometrics Toolbox, use the form in Equation 5-10.


Stationarity and Invertibility of the ARMA Model


Consider the ARMA( p , q ) model in lag operator notation,


ϕ(L) y_t = c + θ(L) ε_t.


From this expression, you can see that the process can be written as y_t = μ + ψ(L) ε_t, where μ = c / (1 − ϕ_1 − … − ϕ_p) is the unconditional mean of the process, and ψ(L) = θ(L)/ϕ(L) is a rational, infinite-degree lag operator polynomial, (1 + ψ_1 L + ψ_2 L^2 + …).


Note: The Constant property of an arima model object corresponds to c, not to the unconditional mean μ.


By Wold's decomposition [1], the representation y_t = μ + ψ(L) ε_t corresponds to a stationary stochastic process provided the coefficients ψ_i are absolutely summable. This is the case when the AR polynomial ϕ(L) is stable, meaning all its roots lie outside the unit circle. Additionally, the process is causal provided the MA polynomial is invertible, meaning all its roots lie outside the unit circle.


Econometrics Toolbox enforces stability and invertibility of ARMA processes. When you specify an ARMA model using arima, you get an error if you enter coefficients that do not correspond to a stable AR polynomial or an invertible MA polynomial. Similarly, estimate imposes stationarity and invertibility constraints during estimation.
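As a brief illustration of the two functions just mentioned, here is a minimal sketch (it assumes the Econometrics Toolbox is installed and that y is a column vector of observations; the lag orders are arbitrary):

Mdl = arima(2, 0, 1);        % specify an ARMA(2,1): AR degree 2, no differencing, MA degree 1
EstMdl = estimate(Mdl, y);   % estimation enforces the stationarity/invertibility constraints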


References


[1] Wold, H. A Study in the Analysis of Stationary Time Series. Uppsala, Sweden: Almqvist & Wiksell, 1938.




output = tsmovavg(tsobj, 's', lead, lag) and output = tsmovavg(vector, 's', lead, lag, dim) compute the simple moving average. lead and lag indicate the number of previous and following data points used in conjunction with the current data point when calculating the moving average. For example, if you want to calculate a five-day moving average, with the current data in the middle, you set both lead and lag to 2 (2 + 1 + 2 = 5).


output = tsmovavg(tsobj, 'e', timeperiod) and output = tsmovavg(vector, 'e', timeperiod, dim) compute the exponentially weighted moving average. The exponential moving average is a weighted moving average whose weights decrease exponentially as you go further into the past. If α is the smoothing constant, the most recent value of the time series is weighted by α, the next most recent value is weighted by α(1 − α), the next value by α(1 − α)^2, and so forth. Here, α is calculated as 2/(timeperiod + 1), or 2/(window_size + 1).


output = tsmovavg(tsobj, 't', numperiod) and output = tsmovavg(vector, 't', numperiod, dim) compute the triangular moving average. The triangular moving average double smooths the data.


tsmovavg calculates the first simple moving average with a window width of numperiod/2. If numperiod is an odd number, it rounds up (numperiod/2) and uses that width to calculate both the first and the second moving average. The second moving average is a simple moving average of the first moving average. If numperiod is an even number, tsmovavg calculates the first moving average using width (numperiod/2) and the second moving average using width (numperiod/2)+1.


output = tsmovavg(tsobj, 'w', weights, pivot) and output = tsmovavg(vector, 'w', weights, pivot, dim) calculate the moving average by supplying weights for each element in the moving window. The length of the weight vector determines the size of the window. For example, if weights = [1 1 1 1 1] and pivot = 3, tsmovavg calculates a simple moving average by averaging the current value with the two previous and two following values.


output = tsmovavg(tsobj, 'm', numperiod) and output = tsmovavg(vector, 'm', numperiod, dim) calculate the modified moving average. The first moving average value is calculated by averaging the past numperiod inputs. The rest of the moving average values are calculated by adding to the previous moving average value the current data point divided by numperiod and subtracting the previous moving average divided by numperiod. Moving average values prior to the numperiod-th value are copies of the data values.
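A rough usage sketch following the signatures documented above (the tsmovavg argument lists have changed across Financial Toolbox releases, so check the documentation for your version; the price series here is made up):

prices = cumsum(randn(1, 250)) + 100;               % hypothetical daily price series (row vector)
sma = tsmovavg(prices, 's', 2, 2, 2);               % 5-point simple MA (lead = lag = 2), along dim 2
ema = tsmovavg(prices, 'e', 10, 2);                 % 10-period exponential MA
wma = tsmovavg(prices, 'w', [1 1 1 1 1], 3, 2);     % weighted MA equivalent to the 5-point simple MA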


Achelis, Steven B. Technical Analysis from A to Z. Second Printing. McGraw-Hill, 1995, pp. 184–192.


What is smoothing and how can I do it?


I have an array in Matlab which is the magnitude spectrum of a speech signal (the magnitude of 128 points of FFT). How do I smooth this using a moving average? From what I understand, I should take a window size of a certain number of elements, take average, and this becomes the new 1st element. Then shift the window to the right by one element, take average which becomes the 2nd element, and so on. Is that really how it works? I am not sure myself since if I do that, in my final result I will have less than 128 elements. So how does it work and how does it help to smooth the data points? Or is there any other way I can do smoothing of data?


asked Oct 15 '12 at 6:30


migrated from stackoverflow.com Oct 15 '12 at 14:51


This question came from our site for professional and enthusiast programmers.


for a spectrum you probably want to average together (in the time dimension) multiple spectra rather than a running average along the frequency axis of a single spectrum – endolith Oct 16 '12 at 1:04


@endolith both are valid techniques. Averaging in the frequency domain (sometimes called a Daniell periodogram) is the same as windowing in the time domain. Averaging multiple periodograms ("spectra") is an attempt to mimic the ensemble averaging required of the true periodogram (this is called the Welch periodogram). Also, as a matter of semantics, I would argue that "smoothing" is non-causal low-pass filtering. See Kalman filtering vs Kalman smoothing, Wiener filtering vs Wiener smoothing, etc. There is a nontrivial distinction and it's implementation dependent. – Bryan Dec 12 '12 at 19:18


Smoothing can be done in many ways, but in very basic and general terms it means that you even out a signal by mixing its elements with their neighbors. You smear/blur the signal a bit in order to get rid of noise. For example, a very simple smoothing technique would be to recalculate every signal element f(t) as 0.8 of the original value plus 0.1 of each of its neighbors:

f_smooth(t) = 0.1 * f(t-1) + 0.8 * f(t) + 0.1 * f(t+1)
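In MATLAB, one way to apply exactly that recalculation is to convolve the signal with the three-point kernel; a minimal sketch, assuming f is a vector holding the signal:

kernel = [0.1 0.8 0.1];              % the weights above; they sum to one
f_smooth = conv(f, kernel, 'same');  % 'same' keeps the output the same length as f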


Note how the multiplication factors, or weights, add up to one. So if the signal is fairly constant, smoothing doesn't change it much. But if the signal contained a sudden jerky change, then the contribution from its neighbors will help to clear up that noise a bit.


The weights you use in this recalculation function can be called a kernel. A one-dimensional Gaussian function or any other basic kernel should do in your case.


A nice example of one particular kind of smoothing (figure omitted): above, the unsmoothed signal; below, the smoothed signal.


Examples of a few kernels (figures omitted).


answered Oct 15 '12 at 6:36


so is this a weighted moving average? Is this called having a window size of 3? What about the 1st and the last element? And how would this be modified if I have an array of 128 elements and I want to use a window of 16 or 32 elements? – user13267 Oct 15 '12 at 6:54


@user13267: Yes, you could say a smoothing kernel is a weighted moving average. If you use a uniform kernel (see second image), it's just a plain moving average. You're right about window size. For dealing with the edges, there are three basic approaches: 1) zero-padding your data, 2) repeating the last value, 3) mirroring the signal. In all cases you make some pretend data so that your kernel doesn't fall off into nothingness. – Junuxx Oct 15 '12 at 7:00


wouldn't zero padding count as falling into nothingness? At the end of the moving average process my new "averaged" data set should have the same number of data points as the original, shouldn't it? Then if I zero pad it at the beginning or the end, or repeat the last data point, won't it bias the average value at the edges of the array? And how would mirroring the signal help in terms of the number of data points? Is there any simple tutorial for this anywhere which shows how the process takes place for, say, 32 data points and a window size of 4 or 5? – user13267 Oct 15 '12 at 8:21


If you want your smoothed dataset to have the same length as the original dataset, you have to "make up" data at the endpoints. Any choice you make for how to create that data biases the average in some way. Treating the out-of-bounds data as a mirror of the real dataset (i.e. assuming that sample N+1 is the same as N-1, N+2 = N-2, etc.) will retain the frequency spectrum characteristics of the end parts of the signal, whereas assuming a zero-or-non-zero repeat will make it appear that all frequencies are rolling off at the ends. – Russell Borogove Oct 15 '12 at 18:17
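To make the three edge-handling options concrete, here is a minimal MATLAB sketch for a moving average of odd width w, assuming y is a row vector (this is illustrative code, not from any of the answers above):

w = 5;  h = (w-1)/2;  k = ones(1, w)/w;                             % uniform kernel of width w
yZ = conv([zeros(1,h), y, zeros(1,h)], k, 'valid');                 % 1) zero-pad the ends
yR = conv([repmat(y(1),1,h), y, repmat(y(end),1,h)], k, 'valid');   % 2) repeat the end values
yM = conv([y(h+1:-1:2), y, y(end-1:-1:end-h)], k, 'valid');         % 3) mirror the signal at the ends
% all three results have the same length as y; they differ only near the edges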


In addition to the nice answer of Junuxx I would like to drop a few notes.


Smoothing is related to filtering (unfortunately the Wikipedia article is quite vague) - you should pick the smoother based on its properties.


One of my favorites is the median filter. This is an example of a non-linear filter. It has some interesting properties, it preserves "edges" and is quite robust under large noise.


If you have a model of how your signal behaves, a Kalman filter is worth a look. Its smoothing is actually a Bayesian maximum likelihood estimation of the signal based on observations.


Smoothing implies using information from neighboring samples in order to change the relationship between neighboring samples. For finite vectors, at the ends, there is no neighboring information to one side. Your choices are: don't smooth/filter the ends, accept a shorter resulting smoothed vector, make up data and smooth with that (depends on the accuracy/usefulness of any predictions off the ends), or maybe using different asymmetric smoothing kernels at the ends (which ends up shortening the information content in the signal anyway).


answered Oct 15 '12 at 19:44


Others have mentioned how you do smoothing, I'd like to mention why smoothing works.


If you properly oversample your signal, it will vary relatively little from one sample to the next (sample = timepoints, pixels, etc.), and it is expected to have an overall smooth appearance. In other words, your signal contains few high frequencies, i.e. signal components that vary at a rate similar to your sampling rate.


Yet, measurements are often corrupted by noise. In a first approximation, we usually consider the noise to follow a Gaussian distribution with mean zero and a certain standard deviation that is simply added on top of the signal.


To reduce noise in our signal, we commonly make the following four assumptions: noise is random, is not correlated among samples, has a mean of zero, and the signal is sufficiently oversampled. With these assumptions, we can use a sliding average filter.


Consider, for example, three consecutive samples. Since the signal is highly oversampled, the underlying signal can be considered to change linearly, which means that the average of the signal across the three samples would equal the true signal at the middle sample. In contrast, the noise has mean zero and is uncorrelated, which means that its average should tend to zero. Thus, we can apply a three-sample sliding average filter, where we replace each sample with the average between itself and its two adjacent neighbors.
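A minimal MATLAB sketch of that three-sample sliding average, assuming y is a vector of samples (the weighted variant discussed next simply swaps in a different kernel):

y_smooth = conv(y, [1 1 1]/3, 'same');   % each sample replaced by the mean of itself and its two neighbors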


Of course, the larger we make the window, the more the noise will average out to zero, but the less our assumption of linearity of the true signal holds. Thus, we have to make a trade-off. One way to attempt to get the best of both worlds is to use a weighted average, where we give farther away samples smaller weights, so that we average noise effects from larger ranges, while not weighting true signal too much where it deviates from our linearity assumption.


How you should put the weights depends on the noise, the signal, and computational efficiency, and, of course, the trade-off between getting rid of the noise and cutting into the signal.


Note that there has been a lot of work done in the last few years to allow us to relax some of the four assumptions, for example by designing smoothing schemes with variable filter windows (anisotropic diffusion), or schemes that don't really use windows at all (nonlocal means).


answered Dec 27 '12 at 15:10


Simple Example On Weighted Moving Average


This simple example shows how to find the DC offset embedded in a sinusoidal signal using a Weighted Moving Average. The frequency of the sinusoidal input signal is 1 rad/s and its period is 2*pi. The sample time of the Zero-Order Hold block is 0.1 s. The 'Weight' parameter in the Weighted Moving Average block is determined as follows: weight = ones(1,round(2*pi/0.1))/round(2*pi/0.1). Requirements: MATLAB Release R14SP3, Simulink.


Related Scripts


Gui Technical Analysis Tool Instructions:1. Give the symbol of the stock.2. Give today's date in the specific format (months-days-year).3. 'GET DATA' button fetches the data from.


Wsma The WSMA (Weighted/Simple Moving Average) is a kind of moving average much like the one proposed by Tillson (i. e. TillsonT3). Under the same concept b.


Lib_mysqludf_ta 0.01 The lib_mysqludf_ta MySQL UDF allows database administrators to run technical analysis operations right from the MySQL core. The extension includes fun.


Moving Averages MOVING will compute moving averages of order n (best taken as odd)Usage: y=moving(x, n)where x is the input vector to be smoothedn is number of points.


Weighted Selection 1.0.1 Weighted Selection is a simple library to produce weighted randomized results given a set of relative weights.


Technical Analysis For. net Provides a collection of technical indicators which can be used in the construction of technical trading systems. Moreover, by using these methods wit.


Tagadelic Tagadelic is a small Drupal module, without any databases, or configuration, that generates pages with weighted tags. Tagadelic is an out of the box, r.


Simple Average Calculation 1.0 This script allows you to run a simple program that can average a list of numbers.


Analytica 0.0.14 The library can be used for graph plotting, derivatives and for indicators [sum mean, moving average etc.].


Falcon's Moving Calculator Falcon's Moving Calculator is a web package for enabling web live help for moving companies, definitely improving interaction with customers and web s.


Mvaverage This is a very fast operation, smoothing a matrix with no NaNs via recursive moving average methods. Requirements:· MATLAB 7.4 or higher.


Fillnans FILLNANS replaces all NaNs in array using inverse-distance weighting. Y = FILLNANS(X) replaces all NaNs in the vector or array X by inverse-distance we.


Tillsont3 It calculates the Tillson moving average. The user is able to change the parameters such as the smoothing sweeps and the volume factor. Requirements:&.


Php Depend 1.1.0 JDepend is a package dependency analyzer that generates design quality metrics. JDepend can also be downloaded from here. PHP Depend performs stati.


Riskcalc Simple VaR Calculator provides: - Evaluation of return distribution of single asset or portfolio of assets; - Volatility forecasts using moving averag.


Thimblebench 0.1 The suite runs the same scripts on different servers and returns results in a sortable table for easy comparisons. Works with PHP 4.x and PHP 5.x the s.


Decision Analysis Decision Analysis is an easily-extensible expert system to help users make decisions of all types. Written entirely in Python, Decision Analysis, at t.


Average Latitude And Longitude For Us States It comes as a text file with data organized on three columns. The first column is the state ISO 3166-2 code abbreviation, the second is the average lat.


Moran's I PURPOSE: calculate local Moran's I for a local grid using a weight matrix. USAGE: m = moransI(grid, W, s);where: [grid] is the matrix to analyze[W] is.


Randomlib The class can be used to select one or a group of item(strings, objects, anything) from the entire collection of items. It include randomly select, ra.


Documentation


Moving Average Model


MA( q ) Model


The moving average (MA) model captures serial autocorrelation in a time series y_t by expressing the conditional mean of y_t as a function of past innovations, ε_{t-1}, ε_{t-2}, …, ε_{t-q}. An MA model that depends on q past innovations is called an MA model of degree q, denoted by MA(q).


The form of the MA(q) model in Econometrics Toolbox™ is


y_t = c + ε_t + θ_1 ε_{t-1} + … + θ_q ε_{t-q},


where ε_t is an uncorrelated innovation process with mean zero. For an MA process, the unconditional mean of y_t is μ = c.


In lag operator polynomial notation, L^i y_t = y_{t-i}. Define the degree q MA lag operator polynomial θ(L) = (1 + θ_1 L + … + θ_q L^q). You can write the MA(q) model as


y_t = μ + θ(L) ε_t.


Invertibility of the MA Model


By Wold's decomposition [1], an MA(q) process is always stationary because θ(L) is a finite-degree polynomial.


For a given process, however, there is no unique MA polynomial—there is always a noninvertible and an invertible solution [2]. For uniqueness, it is conventional to impose invertibility constraints on the MA polynomial. Practically speaking, choosing the invertible solution implies the process is causal. An invertible MA process can be expressed as an infinite-degree AR process, meaning only past events (not future events) predict current events. The MA operator polynomial θ(L) is invertible if all its roots lie outside the unit circle.


Econometrics Toolbox enforces invertibility of the MA polynomial. When you specify an MA model using arima, you get an error if you enter coefficients that do not correspond to an invertible polynomial. Similarly, estimate imposes invertibility constraints during estimation.


References


[1] Wold, H. A Study in the Analysis of Stationary Time Series. Uppsala, Sweden: Almqvist & Wiksell, 1938.




Double Moving Average Crossover


This is the 16th day's lesson in a 60-day series called “Technical Analysis Training”.


You will receive one installment of this training each day after 8 o'clock in the evening (after dinner).


Follow the MoneyMunch.com Technical Analysis Directory and learn the basics of technical analysis on the Indian stock market (NSE/BSE).




When a shorter and a longer moving average of a security's price cross one another (the event), a bullish or bearish signal is generated according to the direction of the crossover.


A moving average is an indicator that shows the average value of a security's price over a period of time. This technical event occurs when a shorter and a longer moving average cross one another. The supported crossovers are 21 crossing 50 (a short-term signal) and 50 crossing 200 (a long-term signal).


A bullish signal is produced when the shorter moving average crosses above the longer moving average. A bearish signal is generated when the shorter moving average crosses below the longer one.


These events are based on simple moving averages. A simple moving average is one in which equal weight is given to every price in the calculation period. For example, a 21-day simple moving average is calculated by taking the sum of the last 21 days of the stock's closing price and dividing by 21, as in the sketch below. Other types of moving averages, not supported here, are weighted averages and exponentially smoothed averages.
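A minimal MATLAB sketch of the 21/50-day crossover described here, assuming closePrice is a row vector of daily closing prices (the variable names are illustrative only):

sma21 = conv(closePrice, ones(1,21)/21, 'valid');   % 21-day simple moving average
sma50 = conv(closePrice, ones(1,50)/50, 'valid');   % 50-day simple moving average
sma21 = sma21(end-numel(sma50)+1:end);              % align both series on the same ending dates
signal = diff(sign(sma21 - sma50));                 % +2 marks a bullish crossover, -2 a bearish one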


Trading Factors: Moving averages are lagging indicators because they use historical information. Using them as signals will not get you in at the bottom and out at the top, but it can get you in and out somewhere in between. They work best in trending price patterns, where a strong uptrend or downtrend is firmly in place. A moving average crossover is regarded as a better signal than a single moving average because two smoothed series of prices reduce the number of false signals.


Supporting Factors


Indicators that are well suited to use alongside moving averages include MACD and Momentum.


Main Behavior: Moving averages excel in trending markets, but they generate numerous false signals in choppy, sideways markets.


Message for you (trader/investor): Google has the answers to most of your questions; after exploring Google, if you still have thoughts or questions, my email is open 24/7. Each week you will receive your course materials, which you can print and highlight for your Technical Analysis Training.


Technical Analysis Training (60 Days – Comprehensive Course)


Short-Term Chart Patterns (15 Days)


Short-Term Chart Patterns (7 Days)






MetaTrader 5 - Indicators


Fractal Adaptive Moving Average (FrAMA) - indicator for MetaTrader 5


Description:


The Fractal Adaptive Moving Average technical indicator (FRAMA) was developed by John Ehlers.


This indicator is constructed based on the algorithm of the Exponential Moving Average, in which the smoothing factor is calculated from the current fractal dimension of the price series. The advantage of FRAMA is that it can follow strong trend movements while slowing down sufficiently at moments of price consolidation.


All types of analysis used for Moving Averages can be applied to this indicator.


Fractal Adaptive Moving Average Indicator


Calculation:


FRAMA(i) = A(i) * Price(i) + (1 - A(i)) * FRAMA(i-1)


FRAMA(i) - current value of FRAMA;


Price(i) - current price;


FRAMA(i-1) - previous value of FRAMA;


A(i) - current factor of exponential smoothing.


Exponential smoothing factor is calculated according to the below formula:


A(i) = EXP(-4.6 * (D(i) - 1))


D(i) - current fractal dimension;


EXP() - mathematical function of exponent.


The fractal dimension of a straight line is equal to one. It follows from the formula that if D = 1, then A = EXP(-4.6 * (1 - 1)) = EXP(0) = 1. Thus, if the price changes in a straight line, exponential smoothing is not applied, because in that case the formula becomes:


FRAMA(i) = 1 * Price(i) + (1 - 1) * FRAMA(i-1) = Price(i)


That is, the indicator exactly follows the price.


The fractal dimension of a plane is equal to two. From the formula we get that if D = 2, then the smoothing factor A = EXP(-4.6 * (2 - 1)) = EXP(-4.6) ≈ 0.01. Such a small value of the exponential smoothing factor is obtained at moments when the price makes a strong saw-toothed movement. Such a strong slow-down corresponds approximately to a 200-period simple moving average.


Formula of fractal dimension:


D = (LOG(N1 + N2) - LOG(N3))/LOG(2)


It is calculated based on the additional formula:


N(Length, i) = (HighestPrice(i) - LowestPrice(i))/Length


HighestPrice(i) - current maximal value for Length periods;


LowestPrice(i) - current minimal value for Length periods;


Values N1, N2 and N3 are respectively equal to:


N1(i) = N(Length, i) N2(i) = N(Length, i + Length) N3(i) = N(2 * Length, i)
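Putting the formulas above together, here is a rough MATLAB sketch of the FRAMA recursion (this is not the MetaTrader 5 source; high, low, and price are assumed to be column vectors of equal length, and the recursion is simply seeded with the price itself):

function frama = framaSketch(high, low, price, Length)
    T = numel(price);
    frama = price;                                       % seed values before the recursion starts
    for i = 2*Length:T
        N1 = (max(high(i-Length+1:i)) - min(low(i-Length+1:i))) / Length;
        N2 = (max(high(i-2*Length+1:i-Length)) - min(low(i-2*Length+1:i-Length))) / Length;
        N3 = (max(high(i-2*Length+1:i)) - min(low(i-2*Length+1:i))) / (2*Length);
        D = (log(N1 + N2) - log(N3)) / log(2);           % fractal dimension
        A = exp(-4.6 * (D - 1));                         % exponential smoothing factor
        frama(i) = A * price(i) + (1 - A) * frama(i-1);  % FRAMA recursion
    end
end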


Smoothing


In many experiments in science, the true signal amplitudes (y-axis values) change rather smoothly as a function of the x-axis values, whereas many kinds of noise are seen as rapid, random changes in amplitude from point to point within the signal. In the latter situation it may be useful in some cases to attempt to reduce the noise by a process called smoothing. In smoothing, the data points of a signal are modified so that individual points that are higher than the immediately adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased. This naturally leads to a smoother signal (and a slower step response to signal changes). As long as the true underlying signal is actually smooth, then the true signal will not be much distorted by smoothing, but the noise will be reduced. In terms of the frequency components of a signal, a smoothing operation acts as a low-pass filter, reducing the high-frequency components and passing the low-frequency components with little change.


Smoothing algorithms . Most smoothing algorithms are based on the " shift and multiply " technique, in which a group of adjacent points in the original data are multiplied point-by-point by a set of numbers (coefficients) that defines the smooth shape, the products are added up and divided by the sum of the coefficients, which becomes one point of smoothed data, then the set of coefficients is shifted one point down the original data and the process is repeated. The simplest smoothing algorithm is the rectangular boxcar or unweighted sliding-average smooth ; it simply replaces each point in the signal with the average of m adjacent points, where m is a positive integer called the smooth width . For example, for a 3-point smooth ( m = 3):


S_j = (Y_{j-1} + Y_j + Y_{j+1}) / 3

for j = 2 to n-1, where S_j is the j-th point in the smoothed signal, Y_j is the j-th point in the original signal, and n is the total number of points in the signal. Similar smooth operations can be constructed for any desired smooth width, m. Usually m is an odd number. If the noise in the data is "white noise" (that is, evenly distributed over all frequencies) and its standard deviation is s, then the standard deviation of the noise remaining in the signal after the first pass of an unweighted sliding-average smooth will be approximately s divided by the square root of m (s/sqrt(m)), where m is the smooth width. Despite its simplicity, this smooth is actually optimum for the common problem of reducing white noise while keeping the sharpest step response.


The triangular smooth is like the rectangular smooth, above, except that it implements a weighted smoothing function. For a 5-point smooth ( m = 5):


S_j = (Y_{j-2} + 2Y_{j-1} + 3Y_j + 2Y_{j+1} + Y_{j+2}) / 9

for j = 3 to n-2, and similarly for other smooth widths (see the spreadsheet UnitGainSmooths.xls). In both of these cases, the integer in the denominator is the sum of the coefficients in the numerator, which results in a “unit-gain” smooth that has no effect on the signal where it is a straight line and which preserves the area under peaks.


It is often useful to apply a smoothing operation more than once, that is, to smooth an already smoothed signal, in order to build longer and more complicated smooths. For example, the 5-point triangular smooth above is equivalent to two passes of a 3-point rectangular smooth. Three passes of a 3-point rectangular smooth result in a 7-point " pseudo-Gaussian " or haystack smooth, for which the coefficients are in the ratio 1:3:6:7:6:3:1. The general rule is that n passes of a w - width smooth results in a combined smooth width of n * w - n +1. For example, 3 passes of a 17-point smooth results in a 49-point smooth. These multi-pass smooths are more effective at reducing high-frequency noise in the signal than a rectangular smooth but exhibit slower step response. In all these smooths, the width of the smooth m is chosen to be an odd integer, so that the smooth coefficients are symmetrically balanced around the central point, which is important because it preserves the x-axis position of peaks and other features in the signal. (This is especially critical for analytical and spectroscopic applications because the peak positions are often important measurement objectives).
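A small MATLAB/Octave check of the rule just stated: three passes of a 3-point rectangular smooth match a single pass of the normalized 1:3:6:7:6:3:1 kernel (y is assumed to be a vector; the results differ slightly near the ends because of the 'same' truncation):

box3 = ones(1,3)/3;                                                      % 3-point rectangular smooth
y3pass = conv(conv(conv(y, box3, 'same'), box3, 'same'), box3, 'same');  % three successive passes
k = [1 3 6 7 6 3 1]; k = k/sum(k);                                       % 7-point pseudo-Gaussian kernel
y1pass = conv(y, k, 'same');                                             % agrees with y3pass away from the ends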


Note that we are assuming here that the x-axis intervals of the signal are uniform, that is, that the difference between the x-axis values of adjacent points is the same throughout the signal. This is also assumed in many of the other signal-processing techniques described in this essay, and it is a very common (but not necessary) characteristic of signals that are acquired by automated and computerized equipment.


The Savitzky-Golay smooth is based on the least-squares fitting of polynomials to segments of the data. The algorithm is discussed in http://www.wire.tu-bs.de/OLDWEB/mameyer/cmr/savgol.pdf. Compared to the sliding-average smooths, the Savitzky-Golay smooth is less effective at reducing noise, but more effective at retaining the shape of the original signal. It is capable of differentiation as well as smoothing. The algorithm is more complex and the computational times are greater than for the smooth types discussed above, but with modern computers the difference is not significant and code in various languages is widely available online. See SmoothingComparison.html.


The shape of any smoothing algorithm can be determined by applying that smooth to a delta function: a signal consisting of all zeros except for one point, as demonstrated by the simple Matlab/Octave script DeltaTest.m. Noise reduction. Smoothing usually reduces the noise in a signal. If the noise is "white" (that is, evenly distributed over all frequencies) and its standard deviation is s, then the standard deviation of the noise remaining in the signal after one pass of a triangular smooth will be approximately s*0.8/sqrt(m), where m is the smooth width. Smoothing operations can be applied more than once: that is, a previously-smoothed signal can be smoothed again. In some cases this can be useful if there is a great deal of high-frequency noise in the signal. However, the noise reduction for white noise is less in each successive smooth. For example, three passes of a rectangular smooth reduce white noise to a standard deviation of approximately s*0.7/sqrt(m), only a slight improvement over two passes.


The frequency distribution of the noise, designated by noise color, substantially affects the ability of smoothing to reduce noise. The Matlab/Octave function NoiseColorTest.m compares the effect of a 100-point boxcar (unweighted sliding average) smooth on the standard deviation of white, pink, and blue noise, all of which have an original unsmoothed standard deviation of 1.0. Because smoothing is a low-pass filter process, it affects low-frequency (pink) noise less, and high-frequency (blue) noise more, than white noise.


End effects and the lost points problem. Note in the equations above that the 3-point rectangular smooth is defined only for j = 2 to n-1. There is not enough data in the signal to define a complete 3-point smooth for the first point in the signal (j = 1) or for the last point (j = n), because there are no data points before the first point or after the last point. (Similarly, a 5-point smooth is defined only for j = 3 to n-2, and therefore a smooth cannot be calculated for the first two points or for the last two points.) In general, for an m-width smooth, there will be (m-1)/2 points at the beginning of the signal and (m-1)/2 points at the end of the signal for which a complete m-width smooth cannot be calculated. What to do? There are two approaches. One is to accept the loss of points and trim off those points or replace them with zeros in the smoothed signal. (That's the approach taken in most of the figures in this paper.) The other approach is to use progressively smaller smooths at the ends of the signal, for example to use 2-, 3-, 5-, and 7-point smooths for signal points 1, 2, 3, and 4, and likewise for points n, n-1, n-2, and n-3, respectively. The latter approach may be preferable if the edges of the signal contain critical information, but it increases execution time. The fastsmooth function discussed below can utilize either of these two methods.


Examples of smoothing. A simple example of smoothing is shown in Figure 4. The left half of this signal is a noisy peak. The right half is the same peak after undergoing a triangular smoothing algorithm. The noise is greatly reduced while the peak itself is hardly changed. Smoothing increases the signal-to-noise ratio and allows the signal characteristics (peak position, height, width, area, etc.) to be measured more accurately by visual inspection.


Figure 4. The left half of this signal is a noisy peak. The right half is the same peak after undergoing a smoothing algorithm. The noise is greatly reduced while the peak itself is hardly changed, making it easier to measure the peak position, height, and width directly by graphical or visual estimation (but it does not improve measurements made by least-squares methods; see below).


The larger the smooth width, the greater the noise reduction, but also the greater the possibility that the signal will be distorted by the smoothing operation. The optimum choice of smooth width depends upon the width and shape of the signal and the digitization interval. For peak-type signals, the critical factor is the smoothing ratio: the ratio between the smooth width m and the number of points in the half-width of the peak. In general, increasing the smoothing ratio improves the signal-to-noise ratio but causes a reduction in amplitude and an increase in the bandwidth of the peak.


The figures above show examples of the effect of three different smooth widths on noisy Gaussian-shaped peaks. In the figure on the left, the peak has a (true) height of 2.0 and there are 80 points in the half-width of the peak. The red line is the original unsmoothed peak. The three superimposed green lines are the results of smoothing this peak with a triangular smooth of width (from top to bottom) 7, 25, and 51 points. Because the peak width is 80 points, the smooth ratios of these three smooths are 7/80 = 0.09, 25/80 = 0.31, and 51/80 = 0.64, respectively. As the smooth width increases, the noise is progressively reduced but the peak height also is reduced slightly. For the largest smooth, the peak width is slightly increased. In the figure on the right, the original peak (in red) has a true height of 1.0 and a half-width of 33 points. (It is also less noisy than the example on the left.) The three superimposed green lines are the results of the same three triangular smooths of width (from top to bottom) 7, 25, and 51 points. But because the peak width in this case is only 33 points, the smooth ratios of these three smooths are larger - 0.21, 0.76, and 1.55, respectively. You can see that the peak distortion effect (reduction of peak height and increase in peak width) is greater for the narrower peak because the smooth ratios are higher. Smooth ratios of greater than 1.0 are seldom used because of excessive peak distortion. Note that even in the worst case, the peak positions are not affected (assuming that the original peaks were symmetrical and not overlapped by other peaks). If retaining the shape of the peak is more important than optimizing the signal-to-noise ratio, the Savitzky-Golay smooth has the advantage over sliding-average smooths. In all cases, the total area under the peak remains unchanged.


It's very important to point out that smoothing results such as those illustrated in the figure above may be deceptively impressive because they employ a single sample of a noisy signal that is smoothed to different degrees. This causes the viewer to underestimate the contribution of low-frequency noise, which is hard to estimate visually because there are so few low-frequency cycles in the signal record. This problem can be visualized by recording a number of independent samples of a noisy signal consisting of a single peak, as illustrated in the two figures below. These figures show ten superimposed plots with the same peak but with independent white noise, each plotted with a different line color, unsmoothed on the left and smoothed on the right. Inspection of the smoothed signals on the right clearly shows the variation in peak position, height, and width between the 10 samples caused by the low-frequency noise remaining in the smoothed signals. Just because a signal looks smooth does not mean there is no noise. Low-frequency noise remaining in the signals after smoothing will still interfere with precise measurement of peak position, height, and width.


(The generating scripts below each figure require functions downloaded from http://tinyurl.com/cey8rwh.)


An alternative to smoothing to reduce noise in the above set of unsmoothed signals is ensemble averaging, which can be performed in this case very simply by the Matlab/Octave code plot(x, mean(y)); the result shows a reduction in white noise by about sqrt(10) = 3.2. This is enough to judge that there is a single peak with Gaussian shape, which can best be measured by curve fitting (covered in a later section) using the Matlab/Octave code peakfit([x;mean(y)],0,0,1), with the result showing excellent agreement with the position, height, and width of the Gaussian peak created in the third line of the generating script (above left).


It should be clear that smoothing can seldom completely eliminate noise, because most noise is spread out over a wide range of frequencies, and smoothing simply reduces the noise in part of its frequency range. Only for some very specific types of noise (e.g. discrete frequency noise or single-point spikes) is there hope of anything close to complete noise elimination.


The figure on the right below is another example signal that illustrates some of these principles. The signal consists of two Gaussian peaks, one located at x=50 and the second at x=150. Both peaks have a peak height of 1.0 and a peak half-width of 10, and a normally-distributed random white noise with a standard deviation of 0.1 has been added to the entire signal. The x-axis sampling interval, however, is different for the two peaks; it's 0.1 for the first peak (from x=0 to 100) and 1.0 for the second peak (from x=100 to 200). This means that the first peak is characterized by ten times more points than the second peak. It may look like the first peak is noisier than the second, but that's just an illusion; the signal-to-noise ratio for both peaks is 10. The second peak looks less noisy only because there are fewer noise samples there and we tend to underestimate the dispersion of small samples. The result of this is that when the signal is smoothed, the second peak is much more likely to be distorted by the smooth (it becomes shorter and wider) than the first peak. The first peak can tolerate a much wider smooth width, resulting in a greater degree of noise reduction. (Similarly, if both peaks are measured with the peakfit method, the results on the first peak will be about 3 times more accurate than on the second peak, because there are 10 times more data points in that peak, and the measurement precision improves roughly with the square root of the number of data points if the noise is white.) You can download the data file "udx" in TXT format or in Matlab MAT format.


Optimization of smoothing. Which is the best smooth ratio? It depends on the purpose of the peak measurement. If the objective of the measurement is to measure the true peak height and width, then smooth ratios below 0.2 should be used and the Savitzky-Golay smooth is preferred. Measuring the height of noisy peaks is much better done by curve fitting the unsmoothed data rather than by taking the maximum of the smoothed data (see CurveFittingC.html#Smoothing). But if the objective of the measurement is to measure the peak position (x-axis value of the peak), much larger smooth ratios can be employed if desired, because smoothing has little effect on the peak position (unless the peak is asymmetrical or the increase in peak width is so large that it causes adjacent peaks to overlap).


In quantitative analysis applications based on calibration by standard samples, the peak height reduction caused by smoothing is not so important. If the same signal processing operations are applied to the samples and to the standards, the peak height reduction of the standard signals will be exactly the same as that of the sample signals and the effect will cancel out exactly. In such cases smooth ratios from 0.5 to 1.0 can be used if necessary to further improve the signal-to-noise ratio. In practical analytical chemistry, absolute peak height measurements are seldom required; calibration against standard solutions is the rule. (Remember: the objective of quantitative analysis is not to measure a signal but rather to measure the concentration of the analyte.) It is very important, however, to apply exactly the same signal processing steps to the standard signals as to the sample signals, otherwise a large systematic error may result.


In general, smoothing is justified mainly in two situations: (a) for cosmetic reasons, to prepare a nicer-looking or more dramatic graphic of a signal for visual inspection or publication, specifically in order to emphasize long-term behavior over short-term; or (b) if the signal will be subsequently processed by a method that would be degraded by the presence of too much high-frequency noise in the signal, for example if the heights of peaks are to be determined graphically or by using the MAX function, or if the locations of maxima, minima, or inflection points in the signal are to be automatically determined by detecting zero-crossings in derivatives of the signal. Optimization of the amount and type of smoothing is very important in these cases (see Differentiation.html#Smoothing). But generally, if a computer is available to make quantitative measurements, it's better to use least-squares methods on the unsmoothed data, rather than graphical estimates on smoothed data.


Care must be used in the design of algorithms that employ smoothing. For example, in a popular technique for peak finding and measurement, peaks are located by detecting downward zero-crossings in the smoothed first derivative, but the position, height, and width of each peak are determined by least-squares curve-fitting of a segment of the original unsmoothed data in the vicinity of the zero-crossing. Thus, even if heavy smoothing is necessary to provide reliable discrimination against noise peaks, the peak parameters extracted by curve fitting are not distorted by the smoothing.


The important points are these: (a) smoothing will not significantly improve the accuracy of parameter measurement by least-squares measurements between separate independent signal samples, (b) all smoothing algorithms are at least slightly "lossy", entailing at least some change in signal shape and amplitude, (c) it is harder to evaluate the fit by inspecting the residuals if the data are smoothed, because smoothed noise may be mistaken for an actual signal, and (d) smoothing the signal will seriously underestimate the parameter errors predicted by propagation-of-error calculations and the bootstrap method. Smoothing can be used to locate peaks, but it should not be used to measure peaks.


Dealing with spikes. Sometimes signals are contaminated with very tall, narrow “spikes” occurring at random intervals and with random amplitudes, but with widths of only one or a few points. It not only looks ugly, but it also upsets the assumptions of least-squares computations because it is not normally-distributed random noise. This type of interference is difficult to eliminate using the above smoothing methods without distorting the signal. However, a “median” filter, which replaces each point in the signal with the median (rather than the average) of m adjacent points, can completely eliminate narrow spikes with little change in the signal, if the width of the spikes is only one or a few points and equal to or less than m. See http://en.wikipedia.org/wiki/Median_filter. The killspikes.m function is another spike-removing function that uses a different approach, based on linear interpolation. Unlike conventional smooths, these functions can be profitably applied prior to least-squares fitting functions. (On the other hand, if it's the spikes that are actually the signal of interest, and other components of the signal are interfering with their measurement, see CaseStudies.html#G.)
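A minimal sketch of such a median filter in MATLAB/Octave (this is not the medianfilter.m or killspikes.m code mentioned above; y is assumed to be a vector, m an odd width, and the end points are simply left untouched):

m = 3;  h = (m-1)/2;
ymed = y;
for j = h+1 : numel(y)-h
    ymed(j) = median(y(j-h:j+h));    % replace each point with the median of m adjacent points
end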


Condensing oversampled signals. Sometimes signals are recorded more densely (that is, with smaller x-axis intervals) than really necessary to capture all the important features of the signal. This results in larger-than-necessary data sizes, which slows down signal processing procedures and may tax storage capacity. To correct this, oversampled signals can be reduced in size either by eliminating data points (say, dropping every other point or every third point) or by replacing groups of adjacent points by their averages. The latter approach has the advantage of using rather than discarding extraneous data points, and it acts like smoothing to provide some measure of noise reduction. (If the noise in the original signal is white, and the signal is condensed by averaging every n points, the noise is reduced in the condensed signal by the square root of n, but with no change in the frequency distribution of the noise.)
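A rough sketch of condensing by block averaging, in the spirit of the condense.m function mentioned below (x and y are assumed to be row vectors and the grouping factor n is arbitrary):

n = 4;
L = floor(numel(y)/n) * n;                 % drop any leftover points that do not fill a group
yc = mean(reshape(y(1:L), n, []), 1);      % one averaged value per group of n points
xc = mean(reshape(x(1:L), n, []), 1);      % condense x the same way so the features stay aligned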


Video Demonstration. This 18-second, 3 MByte video (Smooth3.wmv) demonstrates the effect of triangular smoothing on a single Gaussian peak with a peak height of 1.0 and peak width of 200. The initial white noise amplitude is 0.3, giving an initial signal-to-noise ratio of about 3.3. An attempt to measure the peak amplitude and peak width of the noisy signal, shown at the bottom of the video, is initially seriously inaccurate because of the noise. As the smooth width is increased, however, the signal-to-noise ratio improves and the accuracy of the measurements of peak amplitude and peak width improves. However, above a smooth width of about 40 (smooth ratio 0.2), the smoothing causes the peak to be shorter than 1.0 and wider than 200, even though the signal-to-noise ratio continues to improve as the smooth width is increased. (This demonstration was created in Matlab 6.5.)


SPECTRUM, the freeware Macintosh signal-processing application, includes rectangular and triangular smoothing functions for any number of points. Spreadsheets. Smoothing can be done in spreadsheets using the "shift and multiply" technique described above. In the spreadsheets smoothing.ods and smoothing.xls, the set of multiplying coefficients is contained in the formulas that calculate the values of each cell of the smoothed data in columns C and E. Column C performs a 7-point rectangular smooth (1 1 1 1 1 1 1) and column E does a 7-point triangular smooth (1 2 3 4 3 2 1), applied to the data in column A. You can type in (or Copy and Paste) any data you like into column A, and you can extend the spreadsheet to longer columns of data by dragging the last row of columns A, C, and E down as needed. But to change the smooth width, you would have to change the equations in columns C or E and copy the changes down the entire column. It's common practice to divide the results by the sum of the coefficients so that the net gain is unity and the area under the curve of the smoothed signal is preserved. The spreadsheets UnitGainSmooths.xls and UnitGainSmooths.ods contain a collection of unit-gain convolution coefficients for rectangular, triangular, and Gaussian smooths of width 3 to 29 in both vertical (column) and horizontal (row) format. You can Copy and Paste these into your own spreadsheets.


The spreadsheets MultipleSmoothing.xls and MultipleSmoothing.ods demonstrate a more flexible method in which the coefficients are contained in a group of 17 adjacent cells (in row 5, columns I through Y), making it easier to change the smooth shape and width (up to a maximum of 17). In this spreadsheet, the smooth is applied three times in succession, resulting in an effective smooth width of 49 points applied to column G.


Compared to Matlab/Octave, spreadsheets are much slower, less flexible, and less easily automated. For example, in these spreadsheets, to change the signal or the number of points in the signal, or to change the smooth width or type, you have to modify the spreadsheet in several spaces, whereas to do the same using the Matlab/Octave "fastsmooth" function (below), you need only change in input arguments of a single line of code. And combining several different techniques into one spreadsheet is more complicated than writing a Matlab/Octave script that does the same thing. Smoothing in Matlab and Octave . The custom function fastsmooth implements shift and multiply type smooths using a recursive algorithm. (Click on this link to inspect the code, or right-click to download for use within Matlab). "Fastsmooth" is a Matlab function of the form s=fastsmooth(a, w, type, edge) . The argument "a" is the input signal vector; "w" is the smooth width (a positive integer); "type" determines the smooth type: type=1 gives a rectangular (sliding-average or boxcar) smooth; type=2 gives a triangular smooth, equivalent to two passes of a sliding average; type=3 gives a pseudo-Gaussian smooth, equivalent to three passes of a sliding average. (See SmoothingComparison. html for a comparison of these smoothing modes). The argument "edge" controls how the "edges" of the signal (the first w/2 points and the last w/2 points) are handled. If edge=0, the edges are zero. (In this mode the elapsed time is independent of the smooth width. This gives the fastest execution time). If edge=1, the edges are smoothed with progressively smaller smooths the closer to the end. (In this mode the execution time increases with increasing smooth widths). The smoothed signal is returned as the vector "s". (You can leave off the last two input arguments: fastsmooth(Y, w,type) smooths with edge=0 and fastsmooth(Y, w) smooths with type=1 and edge=0). Compared to convolution-based smooth algorithms, fastsmooth uses a simple recursive algorithm that typically gives much faster execution times, especially for large smooth widths; it can smooth a 1,000,000 point signal with a 1,000 point sliding average in less than 0.1 second. Here's a simple example of fastsmooth demonstrating the effect on white noise (graphic ).
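A minimal usage sketch of the fastsmooth call described above, applied to white noise (it assumes the downloadable fastsmooth.m is on the Matlab/Octave path):

y = randn(1, 10000);           % white noise with standard deviation 1
s = fastsmooth(y, 25, 2, 1);   % 25-point triangular smooth with smoothed edges
std(s)                         % roughly 0.8/sqrt(25) = 0.16, per the noise-reduction rule above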


Here's an experiment in Matlab or Octave that creates a Gaussian peak, smooths it, compares the smoothed and unsmoothed version, then uses the peakfit.m function (version 3.4 or later) to show that smoothing reduces the peak height (from 1 to 0.786) and increases the peak width (from 1.66 to 2.12), but has no effect on the total peak area (as long as you measure the total area under the broadened peak). Smoothing is useful if the signal is contaminated by non-normal noise such as sharp spikes or if the peak height, position, or width are measured by simple methods, but there is no need to smooth the data if the noise is white and the peak parameters are measured by least-squares methods, because the results obtained on the unsmoothed data will be more accurate (see CurveFittingC.html#Smoothing).
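A hedged sketch of that kind of setup (the peak shape is consistent with the fitted width and area reported below, but the exact x-grid and smooth width used by the author are assumptions):

x = 0:.1:10;
y = exp(-(x-5).^2);                  % Gaussian peak: height 1, width ~1.66, area ~1.77
ysmoothed = fastsmooth(y, 11, 3, 1); % illustrative smooth width
plot(x, y, x, ysmoothed)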


>> [FitResults, FitError] = peakfit([x y])
FitResults =
    Peak#   Position   Height    Width     Area
    1       5          1         1.6651    1.7725
FitError = 3.817e-005


>> [FitResults, FitError] = peakfit([x ysmoothed])
FitResults =
    1       5          0.78608   2.1224    1.7759
FitError = 0.13409

The Matlab/Octave user-defined function condense.m, condense(y, n), returns a condensed version of y in which each group of n points is replaced by its average, reducing the length of y by the factor n. (For x, y data sets, use this function on both the independent variable x and the dependent variable y so that the features of y will appear at the same x values.)
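A hedged sketch of how condense might be used, with an equivalent reshape/mean line showing the same idea for a plain vector (the x, y variable names are illustrative):

n  = 10;
xc = condense(x, n);                   % condense the independent variable
yc = condense(y, n);                   % condense the dependent variable the same way
% equivalent idea for a vector, trimmed to a multiple of n:
m   = n * floor(numel(y) / n);
yc2 = mean(reshape(y(1:m), n, []), 1); % average each group of n consecutive points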


The Matlab/Octave user-defined function medianfilter.m, medianfilter(y, w), performs a median-based filter operation that replaces each value of y with the median of w adjacent points (w must be a positive integer).
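A hedged sketch of the same kind of operation, using a centered window clipped at the ends of the signal (this is an illustration of the idea, not the medianfilter.m source):

w  = 7;                               % window width (positive integer)
h  = floor(w/2);
ym = y;                               % preallocate with the original signal
for k = 1:numel(y)
    lo = max(1, k-h);
    hi = min(numel(y), k+h);
    ym(k) = median(y(lo:hi));         % median of the adjacent points
end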


ProcessSignal is a Matlab/Octave command-line function that performs smoothing and differentiation on the time-series data set x, y (column or row vectors). It can employ all the types of smoothing described above; type "help ProcessSignal" for details. It returns the processed signal as a vector that has the same shape as x, regardless of the shape of y. The syntax is Processed = ProcessSignal(x, y, DerivativeMode, w, type, ends, Sharpen, factor1, factor2, SlewRate, MedianWidth)


iSignal is an interactive function for Matlab that performs smoothing for time-series signals using all the algorithms discussed above, including the Savitzky-Golay smooth, a median filter, and a condense function, with keystrokes that allow you to adjust the smoothing parameters continuously while observing the effect on your signal instantly, making it easy to observe how different types and amounts of smoothing affect noise and signal (such as the height, width, and areas of peaks). Other functions include differentiation, peak sharpening, interpolation, least-squares peak measurement, and a frequency spectrum mode that shows how smoothing and other functions can change the frequency spectrum of your signals. The simple script "iSignalDeltaTest" demonstrates the frequency response of iSignal's smoothing functions by applying them to a single-point spike, allowing you to change the smooth type and the smooth width to see how the frequency response changes. View the code here or download the ZIP file with sample data for testing.


iSignal for Matlab.


Note: you can right-click on any of the m-file links on this site and select "Save Link As..." to download them to your computer for use within Matlab. Unfortunately, iSignal does not currently work in Octave.


Matlab moving average


Another possibility is to use cumsum; this approach probably requires fewer operations than conv does: x = 1:8; n = 5; cs = cumsum(x); …
Using conv is an excellent way to implement a moving average; in the code you are using, wts is how much you are weighing each value.
This MATLAB function returns the simple moving average for a financial time series object, tsobj.
This example shows how to estimate a long-term trend using a symmetric moving average function. A specific example of a linear filter is the moving average: consider a time series yt, t = 1…N, and a symmetric (centered) moving average filter of window length 2q.
May 22, 2013: result = movingmean(data, window, dim, option) computes a centered moving average of the data matrix "data" using a specified window size.
This MATLAB function smooths the data in the column vector y using a moving average filter.
Jun 28, 2013: how can I calculate a moving average for a column of data? For instance, averaging the 50 points either side of each data point.
Mar 12, 2013: a File Exchange submission that calculates the moving average along a vector.
Jan 2, 2013: can I use a moving average for real-time input, e.g. smoothing a fluctuating voltage measured with an Arduino?
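The cumsum snippet above is truncated; a minimal completion of that idea (a trailing n-point average obtained from differences of the cumulative sum) might look like this:

x  = 1:8;
n  = 5;
cs = cumsum(x);
result = (cs(n:end) - [0, cs(1:end-n)]) / n;   % trailing n-point moving average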




A Korean-language blog post revisits the moving-average (구간평균법) method, noting that most processors performing ADC conversions apply it according to the user's settings …
Notes_5, GEOS 585A, Spring 2015, 5 Autoregressive-Moving-Average Modeling, 5.1 Purpose: autoregressive-moving-average (ARMA) models are mathematical models of persistence in a time series.
As part of our spreadcheats, today we will learn how to calculate a moving average using Excel formulas; as a bonus, you will also learn how to calculate a moving …
On the first plot, we have the input that is going into the moving average filter; the input is noisy and our objective is to reduce the noise.
Introduction: various MATLAB® functions help you work with difference equations and filters to shape the variations in the raw data.
Advanced Source Code: Matlab source code for Low Computational Iris Recognition Based on Moving Average Filter.
For some observed time series, a very high-order AR or MA model is needed to model the underlying process well.





How can I measure an average value of a continuous signal in Simulink?


The answer to this question depends on your switching frequency or ripple frequency. You can use the above-mentioned methods, provided you know the frequency of the ripple. Even a simple low-pass filter might work.


But if you are dealing with a variable switching frequency (such as hysteresis current control), then you need an adaptive filter. Try searching for keywords like "adaptive moving average filter" and "variable frequency".


Jafar Sadeghi · University of Sistan and Baluchestan


Simply integrate it with a 1/s block and then divide by the elapsed simulation time (Clock block) using a Divide block.




Documentation


Description


[macdvec, nineperma] = macd(data) calculates the Moving Average Convergence/Divergence (MACD) line, macdvec, from the data matrix, data, and the nine-period exponential moving average, nineperma, from the MACD line.


When the two lines are plotted, they can give you an indication of whether to buy or sell a stock, when an overbought or oversold condition is occurring, and when the end of a trend might occur.


The MACD is calculated by subtracting the 26-period (7.5%) exponential moving average from the 12-period (15%) moving average. The 9-day (20%) exponential moving average of the MACD line is used as the signal line. For example, when the MACD and the 20% moving average line have just crossed and the MACD line falls below the other line, it is time to sell.
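As a rough illustration of that calculation (a hedged sketch, not the toolbox macd source; "price" is assumed to be a vector of closing prices and each EMA is seeded with the first price):

ema = @(x, n) filter(2/(n+1), [1, 2/(n+1)-1], x, (1 - 2/(n+1))*x(1));  % n-period EMA
macdline = ema(price, 12) - ema(price, 26);   % 12-period EMA minus 26-period EMA
sigline  = ema(macdline, 9);                  % 9-period EMA of the MACD line (signal line)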


[macdvec, nineperma] = macd(data, dim) lets you specify the orientation of the input. If the input data is a matrix, you must indicate whether each row is a set of observations (dim = 2) or each column is a set of observations (dim = 1, the default).


macdts = macd(tsobj, series_name) calculates the MACD line from the financial time series tsobj, and the nine-period exponential moving average from the MACD line. The MACD is calculated for the closing price series in tsobj, presumed to have been named Close. The result is stored in the financial time series object macdts. The macdts object has the same dates as the input object tsobj and contains only two series, named MACDLine and NinePerMA. The first series contains the values representing the MACD line and the second is the nine-period exponential moving average of the MACD line.


Examples


This example shows how to compute the MACD for Disney stock and plot the results.


Related Examples


More About


References


Achelis, Steven B. Technical Analysis From A To Z . Second Printing, McGraw-Hill, 1995, pp. 166–168.


See Also


Introduced before R2006a




Linear Regression Indicator


The Linear Regression Indicator is used for trend identification and trend following in a similar fashion to moving averages. The indicator should not be confused with Linear Regression Lines — which are straight lines fitted to a series of data points. The Linear Regression Indicator plots the end points of a whole series of linear regression lines drawn on consecutive days. The advantage of the Linear Regression Indicator over a normal moving average is that it has less lag than the moving average, responding quicker to changes in direction. The downside is that it is more prone to whipsaws.
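A minimal Matlab sketch of the definition above, assuming "price" is a vector of closes and a 100-day window (this is an illustration, not any charting package's code): fit a least-squares line to each window and keep its end-point value.

n   = 100;
lri = nan(size(price));
for k = n:numel(price)
    w      = price(k-n+1:k);
    p      = polyfit((1:n)', w(:), 1);   % slope and intercept over the window
    lri(k) = polyval(p, n);              % end point of the fitted regression line
end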




The Linear Regression Indicator is only suitable for trading strong trends. Signals are taken in a similar fashion to moving averages. Use the direction of the Linear Regression Indicator to enter and exit trades — with a longer term indicator as a filter.


Go long if the Linear Regression Indicator turns up — or exit a short trade.


Go short (or exit a long trade) if the Linear Regression Indicator turns down.


A variation on the above is to enter trades when price crosses the Linear Regression Indicator, but still exit when the Linear Regression Indicator turns down.


Example




Go long [L] when price crosses above the 100-day Linear Regression Indicator while the 300-day is rising


Exit [X] when the 100-day Linear Regression Indicator turns down


Go long again at [L] when price crosses above the 100-day Linear Regression Indicator


Exit [X] when the 100-day Linear Regression Indicator turns down


Go long [L] when price crosses above 100-day Linear Regression


Exit [X] when the 100-day indicator turns down


Go long [L] when the 300-day Linear Regression Indicator turns up after price crossed above the 100-day Indicator


Exit [X] when the 300-day Linear Regression Indicator turns down. Bearish divergence on the indicator warns of a major trend reversal.


REMST: MATLAB function to remove trend and seasonal component using the moving average method


Y = REMST returns a time series with the polynomial trend and the seasonal component of a given period removed. As additional output parameters it also returns the identified seasonal component and the fitted polynomial coefficients. REMST uses the moving average technique (see e.g. Weron (2006), "Modeling and Forecasting Electricity Loads and Prices", Wiley, Section 2.4.3).
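For illustration only, here is a minimal Matlab sketch of the classical moving-average decomposition that this kind of routine is based on (this is not the actual REMST code; y is assumed to be a row vector and s an odd seasonal period):

s     = 7;                                     % seasonal period (illustrative)
trend = conv(y, ones(1, s)/s, 'same');         % centered moving-average trend estimate
detr  = y - trend;                             % detrended series
seas  = zeros(1, s);
for k = 1:s
    seas(k) = mean(detr(k:s:end));             % average of each position in the cycle
end
seas     = seas - mean(seas);                  % force the seasonal component to sum to zero
seasfull = repmat(seas, 1, ceil(numel(y)/s));
yclean   = detr - seasfull(1:numel(y));        % trend- and season-adjusted series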




Software component provided by Boston College Department of Economics in its series Statistical Software Components with number M429001.






MATLAB code for unrolling a moving average


The following code takes a set of 3-day rolling-average data and extracts the set of 1-day data points with the least variance. Copyright 2008 Sam Wang.


%%%%%%%%%%%%%
% unroll.m
% Copyright 2008 by Sam Wang
clear
% Gallup sample data set
rollavg=[45 43 44 43 44 44 44 45 45 44 44 42 41 41 42 43 42 43 42 44 45 48 49 49 48 48 48 47 47 47 47 45 44 44 44 45 44 44 44 46 45 44 42 42 43 44 43 42 42 43];


days=length(rollavg);
stepsize=1;
work=[rollavg(1) rollavg(1) rollavg];
steps=[work(1)-15*stepsize:stepsize:work(1)+15];
numsteps=length(steps);
for i=1:numsteps
    work(1)=steps(i);
    work(2)=(3*rollavg(1)-work(1))/2;
    work(3)=3*rollavg(1)-work(1)-work(2);
    for j=1:days-1
        work(j+3)=3*rollavg(j+1)-work(j+2)-work(j+1);
    end
    stdevs(i)=std(work);
end
[y, imin]=min(stdevs);
work(1)=steps(imin); % this value of the first one-day poll minimizes the variance


for i=1:numsteps
    work(2)=steps(i);
    for j=1:days
        work(j+2)=3*rollavg(j)-work(j+1)-work(j);
    end
    stdevs(i)=std(work);
end
[y, imin]=min(stdevs);
work(2)=steps(imin); % value of first one-day poll that minimizes variance


for j=1:days
    work(j+2)=3*rollavg(j)-work(j+1)-work(j);
end




Smoothing with Exponentially Weighted Moving Averages


A moving average takes a noisy time series and replaces each value with the average value of a neighborhood about the given value. This neighborhood may consist of purely historical data, or it may be centered about the given value. Furthermore, the values in the neighborhood may be weighted using different sets of weights. Here is an example of an equally weighted three-point moving average using historical data:

$$s_t = \tfrac{1}{3}\left(y_t + y_{t-1} + y_{t-2}\right)$$


Here, $s_t$ represents the smoothed signal and $y_t$ represents the noisy time series. In contrast to simple moving averages, an exponentially weighted moving average (EWMA) adjusts a value according to an exponentially weighted sum of all previous values. This is the basic idea:

$$s_t = \alpha\, y_t + (1 - \alpha)\, s_{t-1}, \qquad 0 < \alpha < 1$$


This is nice because you don't have to worry about having a three-point window versus a five-point window, or worry about the appropriateness of your weighting scheme. With the EWMA, previous perturbations are "remembered" and "slowly forgotten" by the $(1-\alpha)\,s_{t-1}$ term in the last equation, whereas with a window or neighborhood with discrete boundaries, a perturbation is forgotten as soon as it passes out of the window.


Averaging the EWMA to Accommodate Trends


After reading about EWMAs in a data analysis book, I had gone along happily using this tool on every single smoothing application that I came across. It was not until later that I learned that the EWMA function is really only appropriate for stationary data, i.e. data without trends or seasonality. In particular, the EWMA function resists trends away from the current mean that it's already "seen". So, if you have a noisy hat function that goes from 0, to 1, and then back to 0, then the EWMA function will return low values on the up-hill side, and high values on the down-hill side. One way to circumvent this is to smooth the signal in both directions, marching forward, and then marching backward, and then average the two. Here, we will use the EWMA function provided by the pandas module.
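A minimal Matlab/Octave sketch of that two-pass idea (the smoothing constant alpha is an illustrative assumption, and y is assumed to be a vector):

alpha = 0.3;
n = numel(y);
fwd = y; bwd = y;
for k = 2:n
    fwd(k)     = alpha*y(k)     + (1-alpha)*fwd(k-1);     % forward pass
    bwd(n-k+1) = alpha*y(n-k+1) + (1-alpha)*bwd(n-k+2);   % backward pass
end
smoothed = (fwd + bwd)/2;   % average the two passes to cancel the directional lag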


Holt-Winters Second Order EWMA


The Holt-Winters second order method extends the EWMA with a second, exponentially smoothed trend term, and can be applied to the same kind of noisy hat function as before.
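A minimal Matlab/Octave sketch of the second-order (Holt) recursion, with illustrative smoothing constants:

alpha = 0.3; beta = 0.3;
s = y; b = zeros(size(y));
b(1) = y(2) - y(1);                                   % simple initial trend estimate
for k = 2:numel(y)
    s(k) = alpha*y(k) + (1-alpha)*(s(k-1) + b(k-1));  % level update
    b(k) = beta*(s(k) - s(k-1)) + (1-beta)*b(k-1);    % trend update
end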




When faced with apparently random variation in a collection of things, the first thing a statistician does is compute an average or, more precisely, the arithmetic mean. What's the average height of 30 year old men? Measure a whole bunch of them, add up their heights, and divide by the number you measured. Whether the number you get is useful for anything is another matter, but at least you can always easily calculate an average.


Since the weight trend is being obscured by an apparently random day to day variation caused mostly by the instantaneous water content of the rubber bag, what about averaging several days' weights and plotting the averages instead? Let's try it; take the weights for each 10 day period on the graph, calculate the average, and plot it as a little square in the middle of the 10 day interval. Here's the result, overlaid on the original chart showing the true weight trend.


It looks like we're on to something here! The averages track the trend very closely indeed. Averaging has filtered out the influence of the daily variations, leaving only the longer term trend. But we can do even better. Rather than waiting for ten days to elapse before computing the average, why not each day calculate the average of the last ten days? This will give us a continuous graph rather than just one box every ten days, and we don't have to wait 10 days for the next average. Here's what happens when we try this scheme.


Bingo! Averaging the last ten days and plotting the average every day (the heavy blue line) closely follows the trend of actual weight (the thin red line). What we've just computed is called a 10 day moving average . ``moving'' because the average can be thought of as sliding along the curve of raw weight measurements, averaging the last 10 every day.
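In Matlab/Octave terms, the same trailing 10-day average can be sketched as follows (assuming "weight" is a vector of daily readings; the variable name is illustrative):

n   = 10;
tma = filter(ones(1, n)/n, 1, weight);   % trailing n-day moving average
tma(1:n-1) = NaN;                        % the first n-1 days have no full window yet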


You'll notice, if you look closely at the two curves, that the moving average, although the same shape, lags slightly behind the actual trend. This occurs because the moving average for each day looks backward at the last 10 days' data, so it's influenced by prior measurements as well as the present. The lag might seem to be a problem at first glance, but it will actually turn out to be advantageous when we get around to using a moving average for weight control.


We can base a moving average on any number of days, not just 10. Here are 5, 10, 20, and 30 day moving averages of Marvin's daily weight.


As the number of days in the moving average increases, the curve becomes smoother (since day to day fluctuations are increasingly averaged out), but the moving average lags further behind the actual trend since the average includes readings more distant in the past.


Suppose I have an observed time series $y_t$ which I suspect has been smoothed. There appears to be significant autocorrelation at lags 1 and 2, so I suppose that the observed series $y_t$ has the form:


$$y_t = \theta_0 x_t + \theta_1 x_{t-1} + (1 - \theta_0 - \theta_1)x_{t-2}$$


where $x_t$ is the original series I am after.


How can I recover the "original" series $x_t$? Clearly I need a method to estimate $\theta_0$ and $\theta_1$ and then apply the relevant transformation. But how to do that? I don't see how to apply an arima process here.


asked Jan 15 at 12:07


Your model can be written as an ARIMA model, or to be precise an MA(2) model:


$$y_t = z_t + \alpha_1 z_{t-1} + \alpha_2 z_{t-2},$$


ARIMA models are usually postulated with coefficient $1$ for $z_t$, because you can always move the multiplicative constant into the variance of the disturbances.


$$y_t = \theta_0 x_t + \theta_1 x_{t-1} + (1-\theta_0-\theta_1) x_{t-2}$$


becomes an MA(2) model with $z_t = \theta_0 x_t$, $\alpha_1 = \theta_1/\theta_0$ and $\alpha_2 = (1-\theta_0-\theta_1)/\theta_0$.


So you can estimate the MA(2) model and then recover $\theta_0$ and $\theta_1$ from $\alpha_1$ and $\alpha_2$: since the weights sum to one, $\theta_0 = 1/(1+\alpha_1+\alpha_2)$ and $\theta_1 = \alpha_1/(1+\alpha_1+\alpha_2)$.


Here is the example in R:


You can recover $x_t$ as residuals of the arima model:
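A Matlab counterpart of the same procedure, assuming the Econometrics Toolbox arima/estimate/infer interface is available (the original answer used R):

Mdl    = arima('MALags', [1 2], 'Constant', 0);
EstMdl = estimate(Mdl, y(:));                  % fit the MA(2) model to the observed series
a1 = EstMdl.MA{1};  a2 = EstMdl.MA{2};
theta0 = 1/(1 + a1 + a2);                      % recover the smoothing weights
theta1 = a1/(1 + a1 + a2);
xhat   = infer(EstMdl, y(:)) / theta0;         % residuals rescaled to recover x_t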


As you can see, the procedure recovered the coefficients to a precision of 3 decimal places. The recovered $x_t$ is also accurate to a similar precision. The difference is the initialisation. An ARIMA model assumes that the process is infinite, but the data never is, so each estimation procedure must assume some initialisation. As evidenced by the plot, the first few elements of the recovered $x_t$ have the biggest error, but then the error stabilizes.


Since ARIMA models are estimated via a Kalman filter procedure, you could implement it yourself with the proper initialisation. Note that in this example I used quite a big sample of 10000 elements. Less data would result in worse precision; you should run some tests to see the extent of the impact of sample size on the precision of recovery.


answered Jan 15 at 15:48


@mpiktas, it depends on the data. Some series are known to be dependent from prior experience or understanding the process. For instance, it's simply not reasonable to assume that deposit balances or area temperatures are independent or even not autocorrelated. I don't need to test this. Generally, you can't test for independence even if you observe the data. – Aksakal Jan 16 at 3:38


You can start with assuming that your observed variable is obtained from the true value as $$y_t = \theta_0 x_t + \theta_1 x_{t-1} + e_t$$


It would help to know what the process of the underlying variable is; suppose it's $$x_t = \beta_0 + \beta_1 x_{t-1} + u_t$$


where $e_t, u_t$ are errors. If these equations make sense to you then, you can estimate them using Kalman filter, see example here .


Next, you test whether $\theta_0+\theta_1=1$, if it holds statistically, then maybe your specification holds, so you can proceed with a constrained fit.


You have to set the expectations though: smoothing leads to data loss, generally. So you can't reproduce the original series exactly. That's why, using the Kalman filter, we had to make an assumption about the observed and true processes, i.e. we needed to inject some outside data to compensate for the data lost to smoothing in order to recover the true series.


answered Jan 15 at 14:37


Summary. This software computes the theoretical moments, impulse responses and simulations of the nonlinear moving average solution up to the third order. It also computes the moments using the simulated data generated by several other second and third order pruning algorithms.


1. Structure of the nlma Software


The nlma software includes 4 main functions that can be called either separately or jointly


nlma_irf.m. It computes nlma impulse responses, and plots the results if options_.nograph = 0. This function calls


pruning_abounds.m. To compute nlma impulse responses. It calls


return_dynare_version.m. To check which version of Dynare is in use, ensuring required information will be loaded from the correct locations


full_block_dr_new.m. If options_.order = 3 and options_.pruning = 0, this will be invoked; it separates the risk correction of the first order coefficients from the first order coefficients themselves and separates out the blocks of the third order policy function ghxxx etc.


nlma_simul.m. It computes nlma simulations, and plots the results if options_.nograph = 0. This function calls


pruning_abounds.m. To compute nlma simulations. It calls


full_block_dr_new.m. If options_.order = 3 and options_.pruning = 0.


nlma_th_moments.m. It computes nlma theoretical moments up to the third order, and calls


full_block_dr_new.m. If options_.order = 3 and options_.pruning = 0


nlma_th_mom_first.m. To compute first order accurate theoretical moments


nlma_th_mom_second.m. To compute first and second order accurate theoretical moments. It calls


nlma_th_mom_third.m. To compute first, second and third order accurate theoretical moments, and decompose the third order accurate theoretical variance into the individual contributions from the amplification and risk correction channels. It calls


nlma_th_mom_second.m. It calls


disclyap_kron_3.m. To solve some Sylvester equations if a model has more than 8 state variables. It calls


@KronProd. To compute Kronecker products.


simulated_moments.m. It computes moments using simulated data (moments of simulated variables). The simulated data is generated by different pruning algorithms. It calls


pruning_abounds.m. To simulate the chosen pruning algorithm. It calls


full_block_dr_new.m. If options_.pruning = 0 and options_.order = 3.


The remaining 2 functions are called by all the 4 main functions


alt_kron.m. To compute Kronecker products


commutation_sparse.m. To produce the commutation matrix in sparse matrix form.


2. Installation and Usage


This software so far takes the form of Matlab functions, and can be called directly after Dynare's stoch_simul command


Make a copy of this software. In principle it can be put anywhere, but it is recommended to put the copy in Dynare's contrib folder, e.g.,


Add the above path to the Matlab working path (assuming Dynare's matlab folder has already been added), e.g.


>> addpath C:\dynare\4.4.2\contrib\nlma


In a .mod file, call the desired nlma main function(s) after the stoch_simul command, e.g.,


This .mod file asks Dynare to solve the model to third order, and asks nlma to compute theoretical moments up to the third order. These moments will be saved to a structure array with the name nlma_theoretical_moments in Matlab's workspace. Next, it asks nlma to compute and plot impulse responses up to the third order. The impulse responses will be saved to a structure array with the name nlma_irf in Matlab's workspace.


In recent versions of Dynare, higher order impulse responses are computed on the basis of repeated simulations, which may take a while when a model has many variables. The following example shows how to disable Dynare's impulse responses and compute nlma impulse responses of higher order only


stoch_simul(irf = 0, order = 3);


This .mod file disables Dynare's impulse responses by setting irf=0 in the stoch_simul command. It then sets options_.irf=40;, which asks nlma to compute and plot third order accurate impulse responses out to 40 periods.


It is worth noting that neither nlma_irf.m nor nlma_simul.m does joint plots. The impulse response, for example, of each and every variable, to each and every shock, is plotted separately, i.e. there would be M_.endo_nbr*M_.exo_nbr figures in total. It is recommended, for a model with many variables and shocks, to plot a subset of variables of interest at a time. To do this, just specify the subset of variables after the stoch_simul command. This choice of variables will be passed to nlma_irf.m automatically. Take example1.mod in Dynare for example


This plots the impulse responses of consumption and capital only.


3. New Features of this Version


This version of the nlma software uses the RECURSIVE REPRESENTATION of the nonlinear moving average policy rule, which corresponds to Dynare's state space representation of the policy rule. This means there is no need to compute the coefficients of the nlma policy rule separately. Instead, all the coefficients can be recovered from Dynare's through the following mapping:


ghuss_nlma = ghuss + ghxu*alt_kron(ghs2_nlma(select_state,:),eye(ne));


ghuss_nlma is called BETA_Sigma_2_0 in early versions of nlma software.


ghxss_nlma = ghxss + ghxx*alt_kron(ghs2_nlma(select_state,:),eye(npred));


ghxss_nlma is called BETA_Sigma_2_1 in early versions of nlma software.


Indeed, this version directly recovers the nlma coefficients from Dynare's solution and uses them to compute nlma theoretical moments, impulse responses and simulations.


Earlier versions of the nlma software had two parts, the solution part and the moments calculation part. The solution part computes nlma policy rules, impulse responses and simulations and can be downloaded here. The moments calculation part computes nlma theoretical moments and can be downloaded here.


Lan and Meyer-Gohde (2013a) provides theoretical foundations and derivations of the nonlinear moving average approximation.


Lan and Meyer-Gohde (2013b) provides the recursive representation of the nlma policy rule, and compares nlma with several other pruning algorithms.


Lan and Meyer-Gohde (2013c) provides the derivation of nlma theoretical moments and variance decomposition.


I got my simple moving average algorithm for this Matlab Simulink test and custom HFT platform


I got my simple moving average algorithm for this Matlab Simulink test and custom HFT platform. I just need to find a way to source the data into the model. I also hope to see the output of this as well. We shall see, in test mode.


See how I perform and complete this test for my custom HFT system via my free newsletter


NOTE I now post my TRADING ALERTS into my personal FACEBOOK ACCOUNT and TWITTER. Don't worry as I don't post stupid cat videos or what I eat!




Fast or Accurate Moving Average (mex functions)


Y1 = fastmovav(X, w); Y2 = accuratemovav(X, w); The two functions return the moving average for the columns of matrix X, given a running window w. The first element of each column in Y1 or Y2 is the sample mean of the first w elements of the corresponding column of X. Thus, if X is m-by-n, Y1 and Y2 are (m-w+1)-by-n. The sliding window w must be greater than 1 and not greater than m.


The main functions are in two mex files, "fastmovingaverage.c" and "accuratemovingaverage.c", which must be compiled before using the functions in "fastmovav.m" and "accuratemovav.m".


The function fastmovav is extremely fast compared to any other alternative Matlab function that can be currently found on the web and that I am aware of. The drawback is that it may be slightly inaccurate.


The function accuratemovav is less fast than fastmovav but more accurate, and it is still very fast compared to any other alternative I am aware of.


% Example
% First of all, compile the mex functions:
mex fastmovingaverage.c
mex accuratemovingaverage.c


% Some data:
T = 5000; N = 300;
X = cumsum(randn(T, N));
w = ceil(T / 2);


% Fast moving average
tic, Y1 = fastmovav(X, w); toc
% Elapsed time is 0.034531 seconds.


% Accurate moving average
tic, Y2 = accuratemovav(X, w); toc
% Elapsed time is 6.504260 seconds.


% Check slow alternative with loops
tic
Y3 = zeros(size(X, 1) - w + 1, size(X, 2));
for hh = 1:(size(X, 1) - w + 1)
    Y3(hh, :) = mean(X(hh:hh+w-1, :));
end
toc
% Elapsed time is 43.973435 seconds.


% Check accuracy
all(abs(Y1(:) - Y3(:)) < 1e-12)
all(abs(Y1(:) - Y3(:)) < 1e-15)
all(abs(Y2(:) - Y3(:)) < 1e-15)


% Another example with fastmovav - try a large matrix
T = 1e4; N = 2e3;
X = cumsum(randn(T, N));
w = ceil(T / 2);
tic, Y = fastmovav(X, w); toc
% Elapsed time is 0.597617 seconds.






September 16, 2013


The Fractal Adaptive Moving Average aka FRAMA is a particularly clever indicator. It uses the Fractal Dimension of stock prices to dynamically adjust its smoothing period. In this post we will reveal how the FRAMA performs and if it is worthy of being included in your trading arsenal.


To fully understand how the FRAMA works please read this post before continuing. You can also download a FREE spreadsheet containing a working FRAMA that will automatically adjust to the settings you specify. Find it at the following link near the bottom of the page under Downloads – Technical Indicators: Fractal Adaptive Moving Average (FRAMA). Please leave a comment and share this post if you find it useful.


The ‘Modified FRAMA’ that we tested consists of more than one variable. So before we can put it up against other Adaptive Moving Averages to compare their performance, we must first understand how the FRAMA behaves as its parameters are changed. From this information we can identify the best settings and use those settings when performing the comparison with other Moving Average Types.


Each FRAMA requires a setting be specified for the Fast Moving Average (FC), Slow Moving Average (SC) and the FRAMA period itself. We tested trades going Long and Short, using Daily and Weekly data, taking End Of Day (EOD) and End Of Week (EOW) signals


analyzing all combinations of:


FC = 1, 4, 10, 20, 40, 60


SC = 100, 150, 200, 250, 300


FRAMA = 10, 20, 40, 80, 126, 252


Part of the FRAMA calculation involves finding the slope of prices for the first half, second half and the entire length of the FRAMA period. For this reason the FRAMA periods we tested were selected due to being even numbers and the fact that they correspond with the approximate number of trading days in standard calendar periods: 10 days = 2 weeks, 20 days = 1 month, 40 days = 2 months, 80 days = ⅓ year, 126 days = ½ year and there are 252 trading days in an average year. A total of 920 different averages were tested and each one was run through 300 years of data across 16 different global indexes (details here ).
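For orientation, here is a minimal MATLAB sketch of a modified FRAMA, assuming Ehlers' published range-based Fractal Dimension and a clamping of the equivalent smoothing length between "FC" and "SC" (my reading of the Modified FRAMA described here); the variable names and the eps guards are mine, and the code actually used for these tests may differ in detail.

function out = frama_sketch(price, T, FC, SC)
    % price: vector of closes; T: FRAMA period (even); FC/SC: fast and slow limits
    out = price(:);                           % seed the average with the price itself
    wgt = log(2/(SC+1));                      % scales alpha toward the slow (SC) end
    half = T/2;
    for t = T+1:numel(price)
        p1 = price(t-half+1:t);               % most recent half window
        p2 = price(t-T+1:t-half);             % older half window
        p3 = price(t-T+1:t);                  % full window
        N1 = (max(p1) - min(p1)) / half;
        N2 = (max(p2) - min(p2)) / half;
        N3 = (max(p3) - min(p3)) / T;
        D  = (log(max(N1+N2, eps)) - log(max(N3, eps))) / log(2);  % fractal dimension, ~1..2
        alpha = exp(wgt*(D-1));               % D near 1 (trend) -> fast; D near 2 (noise) -> slow
        N = (2 - alpha)/alpha;                % equivalent EMA length
        N = min(max(N, FC), SC);              % clamp between FC and SC
        out(t) = out(t-1) + 2/(N+1)*(price(t) - out(t-1));
    end
end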


Daily vs Weekly Data – EOD vs EOW Signals


In our original MA test; Moving Averages – Simple vs. Exponential we revealed that once an EMA length was above 45 days, by using EOW signals instead of EOD signals you didn’t sacrifice returns but did benefit from a 50% jump in the probability of profit and double the average trade duration. To see if this was also the case with the FRAMA we compared the best returns produced by each signal type:


As you can see, for the FRAMA, Daily data with EOD signals produced by far the most profitable results and we will therefore focus on this data initially. It is presented below on charts split by FRAMA period with the test results on the “y” axis, the Fast MA (FC) on the “x” axis and a separate series displayed for each Slow MA (SC).


FRAMA Annualized Return – Day EOD Long


The first impressive thing about the results above is that every single Daily EOD Long average tested outperformed the buy and hold annualized return of 6.32%^ during the test period (before allowing for transaction costs and slippage). This is a strong vote of confidence for the FRAMA as an indicator.


You will also notice that the data series on each chart are all bunched together revealing that similar results are achieved despite the “SC” period ranging from 100 to 300 days. Changing the other parameters however makes a big difference and returns increase significantly once the FRAMA period is above 80 days. This indicates that the Fractal Dimension is not as useful if measured over short periods.


When the FRAMA period is short, returns increase as the “FC” period is extended. This is due to the Fractal Dimension being very volatile if measured over short periods and a longer “FC” dampening that volatility. Once the FRAMA period is 40 days or more the Fractal Dimension becomes less volatile and as a result, increasing the “FC” then causes returns to decline.


Overall the best annualized returns on the Long side of the market came from a FRAMA period of 126 days which is equivalent to about six months in the market, while a “FC” of just 1 to 4 days proved to be most effective. Assessing the results from the Short side of the market comes to the same conclusion although the returns were far lower: FRAMA Annualized Return – Short .


FRAMA Annualized Return During Exposure – Day EOD Long


The above charts show how productive each different Daily FRAMA EOD Long was while exposed to the market. Clearly the shorter FRAMA periods are far less productive and anything below 40 days is not worth bothering with. The 126 day FRAMA again produced the best returns with the optimal “FC” being 1 – 4 days. Returns for going short followed a similar pattern but as you would expect were far lower; FRAMA Annualized Return During Exposure – Short .


Moving forward we will focus in on the characteristics of the 126 Day FRAMA because it consistently produced superior returns.


FRAMA, EOD – Time in Market. Because the 16 markets used advanced at an average annualized rate of 6.32%^ during the test period it doesn’t come as a surprise that the majority of the market exposure was to the long side. By extending the “FC” it further increased the time exposed to the long side and reduced exposure on the short side. If the test period had consisted of a prolonged bear market the exposure results would probably be reversed.


FRAMA, EOD – Trade Duration. By increasing the “FC” period it also extends the average trade duration. Changing the “SC” makes little difference but as the “SC” is raised from 100 to 300 days the average trade duration does increase ever so slightly.


FRAMA, EOD – Probability of Profit. As you would expect, the probability of profit is higher on the long side which again is mostly a function of the global markets rising during the test period. However the key information revealed by the charts above is that the probability of profit decreases significantly as the “FC” is extended. This is another indication that the optimal FRAMA requires a short “FC” period.


The Best Daily EOD FRAMA Parameters. Our tests clearly show that a FRAMA period of 126 days will produce near optimal results, while for the "SC" we have shown that any setting between 100 and 300 days will produce a similar outcome. The "FC" period on the other hand must be short; 4 days or less. John Ehlers' original FRAMA had a "FC" of 1 and a "SC" of 198; this will produce fantastic results without the need for any modification. Because we prefer to trade as infrequently as possible we have selected a "FC" of 4 and a "SC" of 300 as the best parameters, because these settings result in a longer average trade duration while still producing great returns on both the Long and Short side of the market. FRAMA, EOD – Long. Above you can see how the 126 Day FRAMA with a "FC" of 4 and a "SC" of 300 has performed since 1991 compared to an equally weighted global average of the tested markets. I have included the performance of the 75 Day EMA, EOW because it was the best performing exponential moving average from our original tests. This clearly illustrates that the Fractal Adaptive Moving Average is superior to a standard Exponential Moving Average. The FRAMA is far more active however, producing over 5 times as many trades, and did suffer greater declines during the 2008 bear market.


On the Short side of the market the FRAMA further proves its effectiveness. Without needing to change any parameters the 126 Day FRAMA, EOD 4, 300 remains a top performer. When we ran our original tests on the EMA we found a faster average worked best for going short and that the 25 Day EMA was particularly effective. But as you can see on the chart above the FRAMA outperforms again.


What is particularly noteworthy is that the annualized return during the 27% of the time that this FRAMA was short the market was 6.64%, which is greater than the global average annualized return of 6.32%.


See the results for the 126 Day FRAMA, EOD 4, 300


126 Day FRAMA, EOD 4, 300 – Smoothing Period Distribution. With a standard EMA the smoothing period is constant; if you have a 75 day EMA then the smoothing period is 75 days no matter what. The FRAMA on the other hand is adaptive, so the smoothing period is constantly changing. But how is the smoothing distributed? Does it follow a bell curve between the “FC” and “SC”, is it random, or is it localized around a few values? To reveal the answer we charted the percentage of time that each smoothing period occurred across the 300 years of test data. The chart above came as quite a surprise. It reveals that despite a “FC” to “SC” range of 4 to 300 days, 72% of the smoothing was within a 4 to 50 day range and the majority of it was only 5 to 8 days. This explains why changing the “SC” has little impact and why changing the “FC” makes all the difference. It also explains why the FRAMA does not perform well when using EOW signals, as an EMA must be over 45 days in duration before EOW signals can be used without sacrificing returns.


A Slower FRAMA


We have identified that the FRAMA is a very effective indicator, but the best parameters (126 Day FRAMA, EOD 4, 300 Long) result in a very quick average that in our tests had a typical trade duration of just 14 days. We also know that the 75 Day EMA, EOW Long is an effective yet slower moving average and in our tests had a typical trade duration of 74 days.


A good slow moving average can be a useful component in any trading system because it can be used to confirm the signals from other more active indicators. So we looked through the FRAMA test results again in search of a less active average that is a better alternative to the 75 Day EMA, and this is what we found:


The 252 Day FRAMA, EOW 40, 250 Long produces some impressive results and does outperform the 75 Day EMA, EOW Long by a fraction. This fractional improvement, however, shows up in almost every measure, including the performance on the short side. The only drawback is a slight decrease in the average trade duration from 74 days to 63 when long. As a result the 252 Day FRAMA, EOW 40, 250 has knocked the 75 Day EMA, EOW out of the Technical Indicator Fight for Supremacy.


See the results for the 252 Day FRAMA, EOW 40, 250 Long and Short on each of the 16 markets tested.


252 Day FRAMA, EOW 40, 250 – Smoothing Period Distribution


FRAMA Testing – Conclusion


The FRAMA is astoundingly effective as both a fast and a slow moving average and will outperform any SMA or EMA. We selected a modified FRAMA with a “FC” of 4, a “SC” of 300 and a “FRAMA” period of 126 as being the most effective fast FRAMA although the settings for a standard FRAMA will also produce excellent results. For a slower or longer term average the best results are likely to come from a “FC” of 40, a “SC” of 250 and a “FRAMA” period of 252.


Robert Colby in his book ‘The Encyclopedia of Technical Market Indicators’ concluded, “Although the adaptive moving average is an interesting newer idea with considerable intellectual appeal, our preliminary tests fail to show any real practical advantage to this more complex trend smoothing method.” Well Mr Colby, our research into the FRAMA is in direct contrast to your findings.


It will be interesting to see if any of the other Adaptive Moving Averages can produce better returns. We will post the results HERE as they become available. Well done John Ehlers you have created another exceptional indicator!


Performance of the Periodogram


The following sections discuss the performance of the periodogram with regard to the issues of leakage. resolution. bias. and variance .


Spectral Leakage. Consider the power spectrum or PSD of a finite-length signal x L [ n ], as discussed in The Periodogram. It is frequently useful to interpret x L [ n ] as the result of multiplying an infinite signal, x [ n ], by a finite-length rectangular window, w R [ n ].


Because multiplication in the time domain corresponds to convolution in the frequency domain, the Fourier transform of the expression above is


The expression developed earlier for the periodogram,


illustrates that the periodogram is also influenced by this convolution.


The effect of the convolution is best understood for sinusoidal data. Suppose that x [ n ] is composed of a sum of M complex sinusoids.


Its spectrum is


which for a finite-length sequence becomes


So in the spectrum of the finite-length signal, the Dirac deltas have been replaced by terms of the form W_R(f − f_k), which correspond to the frequency response of a rectangular window centered on the frequency f_k.


The frequency response of a rectangular window has the shape of a sinc signal, as shown below.


The plot displays a main lobe and several side lobes, the largest of which is approximately 13.5 dB below the mainlobe peak. These lobes account for the effect known as spectral leakage . While the infinite-length signal has its power concentrated exactly at the discrete frequency points f k . the windowed (or truncated) signal has a continuum of power "leaked" around the discrete frequency points f k .


Because the frequency response of a short rectangular window is a much poorer approximation to the Dirac delta function than that of a longer window, spectral leakage is especially evident when data records are short. Consider the following sequence of 100 samples.
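The sample sequence itself is not reproduced in this copy; a sketch of the kind of short record the text refers to, with two sinusoids 10 Hz apart (the exact frequencies and sample rate are assumptions), might be:

fs = 1000;                                   % assumed sample rate, Hz
n  = (0:99)';                                % 100 samples
xn = sin(2*pi*200*n/fs) + sin(2*pi*210*n/fs);
[Pxx, f] = periodogram(xn, [], 512, fs);     % rectangular window by default
plot(f, 10*log10(Pxx)), grid on
xlabel('Frequency (Hz)'), ylabel('Power/frequency (dB/Hz)')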


It is important to note that the effect of spectral leakage is contingent solely on the length of the data record. It is not a consequence of the fact that the periodogram is computed at a finite number of frequency samples.


Resolution. Resolution refers to the ability to discriminate spectral features, and is a key concept in the analysis of spectral estimator performance.


In order to resolve two sinusoids that are relatively close together in frequency, it is necessary for the difference between the two frequencies to be greater than the width of the mainlobe of the leaked spectra for either one of these sinusoids. The mainlobe width is defined to be the width of the mainlobe at the point where the power is half the peak mainlobe power (i. e. 3 dB width). This width is approximately equal to f s / L .


In other words, for two sinusoids of frequencies f_1 and f_2, the resolvability condition requires that |f_1 − f_2| > f_s / L.


In the example above, where two sinusoids are separated by only 10 Hz, the data record must be greater than 100 samples to allow resolution of two distinct sinusoids by a periodogram.


Consider a case where this criterion is not met, as for the sequence of 67 samples below.


The above discussion about resolution did not consider the effects of noise since the signal-to-noise ratio (SNR) has been relatively high thus far. When the SNR is low, true spectral features are much harder to distinguish, and noise artifacts appear in spectral estimates based on the periodogram. The example below illustrates this.


Bias of the Periodogram. The periodogram is a biased estimator of the PSD. Its expected value can be shown to be


which is similar to the first expression for X_L(f) in Spectral Leakage, except that the expression here is in terms of average power rather than magnitude. This suggests that the estimates produced by the periodogram correspond to a leaky PSD rather than the true PSD.


Squaring the frequency response of the rectangular window, as happens in the expected value above, essentially yields a triangular Bartlett window (which is apparent from the fact that the convolution of two rectangular pulses is a triangular pulse). This results in a height for the largest sidelobes of the leaky power spectra that is about 27 dB below the mainlobe peak, i.e. about twice the sidelobe attenuation (in dB) of the non-squared rectangular window.


The periodogram is asymptotically unbiased, which is evident from the earlier observation that as the data record length tends to infinity, the frequency response of the rectangular window more closely approximates the Dirac delta function (also true for a Bartlett window). However, in some cases the periodogram is a poor estimator of the PSD even when the data record is long. This is due to the variance of the periodogram, as explained below.


Variance of the Periodogram. The variance of the periodogram can be shown to be approximately


which indicates that the variance does not tend to zero as the data length L tends to infinity. In statistical terms, the periodogram is not a consistent estimator of the PSD. Nevertheless, the periodogram can be a useful tool for spectral estimation in situations where the SNR is high, and especially if the data record is long.


The Modified Periodogram


The modified periodogram windows the time-domain signal prior to computing the FFT in order to smooth the edges of the signal. This has the effect of reducing the height of the sidelobes or spectral leakage. This phenomenon gives rise to the interpretation of sidelobes as spurious frequencies introduced into the signal by the abrupt truncation that occurs when a rectangular window is used. For nonrectangular windows, the end points of the truncated signal are attenuated smoothly, and hence the spurious frequencies introduced are much less severe. On the other hand, nonrectangular windows also broaden the mainlobe, which results in a net reduction of resolution.


The periodogram function allows you to compute a modified periodogram by specifying the window to be used on the data. For example, compare a rectangular window and a Hamming window.
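The comparison code is not shown in this copy; a sketch (reusing xn and fs from the earlier sketch, which are assumptions) could be:

L = length(xn);
[Pr, f] = periodogram(xn, rectwin(L), 512, fs);   % rectangular window
[Ph, ~] = periodogram(xn, hamming(L), 512, fs);   % Hamming window
plot(f, 10*log10(Pr), f, 10*log10(Ph)), grid on
legend('Rectangular window', 'Hamming window')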


You can verify that although the sidelobes are much less evident in the Hamming-windowed periodogram, the two main peaks are wider. In fact, the 3 dB width of the mainlobe corresponding to a Hamming window is approximately twice that of a rectangular window. Hence, for a fixed data length, the PSD resolution attainable with a Hamming window is approximately half that attainable with a rectangular window. The competing interests of mainlobe width and sidelobe height can be resolved to some extent by using variable windows such as the Kaiser window.


Nonrectangular windowing affects the average power of a signal because some of the time samples are attenuated when multiplied by the window. To compensate for this, the periodogram function normalizes the window to have an average power of unity. This way the choice of window does not affect the average power of the signal.


The modified periodogram estimate of the PSD is


where U is the window normalization constant


which is independent of the choice of window. The addition of U as a normalization constant ensures that the modified periodogram is asymptotically unbiased.


An improved estimator of the PSD is the one proposed by Welch [8]. The method consists of dividing the time series data into (possibly overlapping) segments, computing a modified periodogram of each segment, and then averaging the PSD estimates. The result is Welch's PSD estimate.


Welch's method is implemented in the Signal Processing Toolbox by the pwelch function. By default, the data is divided into eight segments with 50% overlap between them. A Hamming window is used to compute the modified periodogram of each segment.


The averaging of modified periodograms tends to decrease the variance of the estimate relative to a single periodogram estimate of the entire data record. Although overlap between segments tends to introduce redundant information, this effect is diminished by the use of a nonrectangular window, which reduces the importance or weight given to the end samples of segments (the samples that overlap).


However, as mentioned above, the combined use of short data records and nonrectangular windows results in reduced resolution of the estimator. In summary, there is a tradeoff between variance reduction and resolution. One can manipulate the parameters in Welch's method to obtain improved estimates relative to the periodogram, especially when the SNR is low. This is illustrated in the following example.


Consider an original signal consisting of 301 samples.


We can obtain Welch's spectral estimate for 3 segments with 50% overlap with
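The code for this example is missing from this copy; a sketch under the same description (301 samples, 3 segments of length 150 with 50% overlap; the signal itself is an assumption) would be:

fs = 1000;
n  = (0:300)';                                      % 301 samples
xn = sin(2*pi*200*n/fs) + sin(2*pi*210*n/fs) + 0.5*randn(size(n));
[Pper, f] = periodogram(xn, [], 512, fs);           % single periodogram, for reference
[Pwel, ~] = pwelch(xn, hamming(150), 75, 512, fs);  % 3 segments of 150 with 50% overlap
plot(f, 10*log10(Pper), f, 10*log10(Pwel)), grid on
legend('Periodogram', 'Welch')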


In the periodogram above, noise and the leakage make one of the sinusoids essentially indistinguishable from the artificial peaks. In contrast, although the PSD produced by Welch's method has wider peaks, you can still distinguish the two sinusoids, which stand out from the "noise floor."


However, if we try to reduce the variance further, the loss of resolution causes one of the sinusoids to be lost altogether.


For a more detailed discussion of Welch's method of PSD estimation, see Kay [2] and Welch [8] .


Bias and Normalization in Welch's Method


Welch's method yields a biased estimator of the PSD. The expected value can be found to be


where L s is the length of the data segments and U is the same normalization constant present in the definition of the modified periodogram. As is the case for all periodograms, Welch's estimator is asymptotically unbiased. For a fixed length data record, the bias of Welch's estimate is larger than that of the periodogram because L s < L .


The variance of Welch's estimator is difficult to compute because it depends on both the window used and the amount of overlap between segments. Basically, the variance is inversely proportional to the number of segments whose modified periodograms are being averaged.


The periodogram can be interpreted as filtering a length L signal, x L [ n ], through a filter bank (a set of filters in parallel) of L FIR bandpass filters. The 3 dB bandwidth of each of these bandpass filters can be shown to be approximately equal to f s / L . The magnitude response of each one of these bandpass filters resembles that of the rectangular window discussed in Spectral Leakage. The periodogram can thus be viewed as a computation of the power of each filtered signal (i. e. the output of each bandpass filter) that uses just one sample of each filtered signal and assumes that the PSD of x L [ n ] is constant over the bandwidth of each bandpass filter.


As the length of the signal increases, the bandwidth of each bandpass filter decreases, making it a more selective filter, and improving the approximation of constant PSD over the bandwidth of the filter. This provides another interpretation of why the PSD estimate of the periodogram improves as the length of the signal increases. However, there are two factors apparent from this standpoint that compromise the accuracy of the periodogram estimate. First, the rectangular window yields a poor bandpass filter. Second, the computation of the power at the output of each bandpass filter relies on a single sample of the output signal, producing a very crude approximation.


Welch's method can be given a similar interpretation in terms of a filter bank. In Welch's implementation, several samples are used to compute the output power, resulting in reduced variance of the estimate. On the other hand, the bandwidth of each bandpass filter is larger than that corresponding to the periodogram method, which results in a loss of resolution. The filter bank model thus provides a new interpretation of the compromise between variance and resolution.


Thompson's multitaper method (MTM) builds on these results to provide an improved PSD estimate. Instead of using bandpass filters that are essentially rectangular windows (as in the periodogram method), the MTM method uses a bank of optimal bandpass filters to compute the estimate. These optimal FIR filters are derived from a set of sequences known as discrete prolate spheroidal sequences (DPSSs, also known as Slepian sequences ).


In addition, the MTM method provides a time-bandwidth parameter with which to balance the variance and resolution. This parameter is given by the time-bandwidth product, NW, and it is directly related to the number of tapers used to compute the spectrum. There are always 2*NW − 1 tapers used to form the estimate. This means that, as NW increases, there are more estimates of the power spectrum, and the variance of the estimate decreases. However, the bandwidth of each taper is also proportional to NW, so as NW increases, each estimate exhibits more spectral leakage (i.e. wider peaks) and the overall spectral estimate is more biased. For each data set, there is usually a value for NW that allows an optimal trade-off between bias and variance.


The Signal Processing Toolbox function that implements the MTM method is called pmtm. Use pmtm to compute the PSD of xn from the previous examples.
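For example (xn and fs as in the sketches above; the NW values are just illustrative):

[Pmtm, f] = pmtm(xn, 4, 512, fs);      % default time-bandwidth product NW = 4
plot(f, 10*log10(Pmtm)), grid on
[Pmtm2, ~] = pmtm(xn, 2, 512, fs);     % lower NW: higher resolution, larger variance (see below)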


By lowering the time-bandwidth product, you can increase the resolution at the expense of larger variance.


Note that the average power is conserved in both cases.


This method is more computationally expensive than Welch's method due to the cost of computing the discrete prolate spheroidal sequences. For long data series (10,000 points or more), it is useful to compute the DPSSs once and save them in a MAT-file. The M-files dpsssave, dpssload, dpssdir, and dpssclear are provided to keep a database of saved DPSSs in the MAT-file dpss.mat.


Cross-Spectral Density Function


The PSD is a special case of the cross spectral density (CSD) function, defined between two signals x n and y n as


As is the case for the correlation and covariance sequences, the toolbox estimates the PSD and CSD because signal lengths are finite.


To estimate the cross-spectral density of two equal length signals x and y using Welch's method, the csd function forms the periodogram as the product of the FFT of x and the conjugate of the FFT of y. Unlike the real-valued PSD, the CSD is a complex function. csd handles the sectioning and windowing of x and y in the same way as the pwelch function.


You can compute confidence intervals using the csd function by including an additional input argument p that specifies the percentage of the confidence interval, and setting the numoverlap argument to 0.


p must be a scalar between 0 and 1. This function assumes chi-squared distributed periodograms of the nonoverlapping sections of windowed data in computing the confidence intervals. This assumption is valid when the signal is a Gaussian distributed random process. Provided these assumptions are correct, the confidence interval


covers the true CSD with probability p. If you set numoverlap to any value other than 0. you generate a warning indicating that the sections overlap and the confidence interval is not reliable.
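A sketch of such a call, based on the description above (the legacy csd argument order is an assumption on my part; current releases use cpsd, which does not take the p argument):

fs = 1000;
x = randn(1024, 1);                        % two example signals (assumed)
y = filter([1 1]/2, 1, x);
nfft = 256; win = hanning(nfft); p = 0.95;
[Pxy, Pxyc, F] = csd(x, y, nfft, fs, win, 0, p);   % numoverlap = 0 so the intervals are valid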


Transfer Function Estimate


One application of Welch's method is nonparametric system identification. Assume that H is a linear, time invariant system, and x(n) and y(n) are the input to and output of H, respectively. Then the power spectrum of x(n) is related to the CSD of x(n) and y(n) by


An estimate of the transfer function between x ( n ) and y ( n ) is


This method estimates both magnitude and phase information. The tfe function uses Welch's method to compute the CSD and power spectrum, and then forms their quotient for the transfer function estimate. Use tfe the same way that you use the csd function.


Filter the signal xn with an FIR filter, then plot the actual magnitude response and the estimated response.
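The filtering and plotting code is not shown here; a sketch (the FIR design is an assumption, and tfe is the legacy name, replaced by tfestimate in current releases) might be:

b  = fir1(30, 0.2);                       % example 30th-order lowpass FIR
yn = filter(b, 1, xn);                    % xn from the earlier sketches
[Txy, F] = tfe(xn, yn, 256, fs);          % estimated transfer function
[H, Fh]  = freqz(b, 1, 256, fs);          % actual response, for comparison
plot(F, abs(Txy), Fh, abs(H)), legend('Estimated', 'Actual')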


The magnitude-squared coherence between two signals x ( n ) and y ( n ) is


This quotient is a real number between 0 and 1 that measures the correlation between x(n) and y(n) at each frequency.


The cohere function takes sequences x and y. computes their power spectra and CSD, and returns the quotient of the magnitude squared of the CSD and the product of the power spectra. Its options and operation are similar to the csd and tfe functions.


The coherence function of xn and the filter output yn versus frequency is
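A sketch of the call (cohere is the legacy name; mscohere replaces it in current releases):

[Cxy, F] = cohere(xn, yn, 256, fs);       % magnitude-squared coherence of input and output
plot(F, Cxy), ylim([0 1])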


If the input sequence length nfft, window length window, and the number of overlapping data points in a window numoverlap are such that cohere operates on only a single record, the function returns all ones. This is because the coherence function for linearly dependent data is one.


EWMA Chart in Excel


Use the EWMA Chart when you have one sample and want to detect small shifts in performance.


The EWMA (exponentially weighted moving average) chart's performance is similar to the Cusum chart.


Example of an EWMA Chart created in the QI Macros for Excel


To create an EWMA control chart within the QI Macros:


Highlight your data and select "EWMA" from the "Control Charts (SPC)" drop-down menu (we offer an EWMA fill-in-the-blank template, as well).


Once selected, you will be prompted to either accept the default alpha parameter of 0.2 or enter in your own:


Per Montgomery 4th Edition, “values of λ in the interval 0.05 ≤ λ ≤ 0.25 work well in practice, with λ = 0.05, λ = 0.10, and λ = 0.20 being popular choices. A good rule of thumb is to use smaller values of λ to detect smaller shifts.”


After you have created your chart, you can update/edit your alpha parameter, under the "Obs 1 Data" tab, in the "Weight" cell:


Note: The lower the value of the alpha parameter, the closer your UCL and LCL will be to the CL; and vice versa.
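Outside of QI Macros, the same EWMA statistic and control limits can be sketched in a few lines of MATLAB following Montgomery's formulas; the data, centerline and sigma estimates below are assumptions for illustration, not part of the QI Macros implementation:

lambda = 0.2;                                  % the weight / alpha parameter
x  = 10 + randn(50, 1);                        % one sample per period (example data)
mu0 = mean(x); sigma = std(x);                 % centerline and sigma estimates
z = zeros(size(x)); z(1) = lambda*x(1) + (1-lambda)*mu0;
for t = 2:numel(x)
    z(t) = lambda*x(t) + (1-lambda)*z(t-1);    % EWMA statistic
end
t = (1:numel(x))';
w = 3*sigma*sqrt(lambda/(2-lambda)*(1 - (1-lambda).^(2*t)));   % 3-sigma limit width
plot(t, z, t, mu0 + w, '--', t, mu0 - w, '--') % smaller lambda pulls the limits toward the CL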


Learn more.


To create an Ewma Chart using QI Macros.


Gui Technical Analysis Tool


Instructions:
1. Give the symbol of the stock.
2. Give today's date in the specific format (months-days-year).
3. The 'GET DATA' button fetches the data from the Yahoo server.
4. Choose the number of days you want to examine.
5. Pick the fast and slow averages used by the functions (remember fast has to be smaller than slow).
6. Press the 'RESULTS' button to obtain the plots: Bollinger, simple moving average, square root weighted moving average, linear moving average, square weighted moving average and exponential moving average.
7. You can update the results using the days/fast/slow options and re-press 'RESULTS'.
Requirements: · MATLAB Release: R14


Related Scripts


Technical Analysis Tool This UI driven application allows users to - load time series data from several sources (yahoo, MATLAB, etc. ) depending on which MATLAB toolboxes the.


Flow Graph Analysis Tool Gui A GUI front-end for Flowgraph Analysis Tool. It's an editor for textual nodelist files. You can create, edit, save and run these files. Flowgraph anal.


Jiro s Statistics System 1.0 Who visited your homepage, when did they visit and where are they coming from? JiRo´s Statistics System logs interesting information from your v.


Techanaltool Financial Technical Analysis Toolbox.


Techtradetool In the age of computerized trading, financial services companies and independent traders must quickly develop and deploy dynamic technical trading sys.


Ta-lib As Mex The Technical Analysis library. API Documentation available at tadoc. org Requirements:· MATLAB Release: R2006b.


A Fully Automated Flowgraph Analysis Tool For Matl A tool for generating one or several transfer functions for a given system. Applicable for both continuous - and discrete-time systems. System can.


Space Truss Systems As Linear Static Analysis In this finite element application is descriptioned simple space a struss system analysis with only linear static model. This analysis model is very s.


Sum Cms 1.3 For developers looking for a small CMS, something to quickly install and get their website off and running, Sum CMS is a great tool to look at. With a.


Http Upload Tool In Php 1.0 PHP Upload Tool provides a simple file management web interface. The motivation was to create a drop-box for users to be able to upload files similar.


Shortcut State Space Circuit Analysis Using state space (SS) analysis, the circuit dc, ac, and transient response can be obtained from the same initial analysis. However, conventional SS m.


Nelinsys The main objective of the tool is to provide better program support for design and simulation of nonlinear control systems in MATLAB/Simulink environm.


Phplog Analyzer 0.3 Php Log Analyzer (aka PLA) is a Log Analysis tool for Apache. There are lots of log analyzer softwares available on the internet but most of them have.


Colea COLEA is a Matlab Speech Processing Toolkit with a graphical user interface. This program can be used to edit speech waveforms (cut, copy or paste sele.


$how Me The Money. 0.1 This tool is quick and easy to use and was developed with PHP. It can help you analyze potential investment property & calculate future profitabi.


Dacota Dacota is aiming to be a fully functional Ruby on Rails stock trading resource. Its prime function is modular and aims to provide extensive capability.


Hnm-speech Analysis/synthsis Model The following zip file contains two routines for analysis/synthesis of HNM. HNM is a analysis/synthesis model of speech, like classical LPC model. Due t.


Backlinkanalyzer Tool 1.0.2a Linktool is a free tool for backlink analysis and organizing your linklists in an easy way. Features:-Backlink Analyzer(google/yahoo based) - CSV URL Li.


Decision Analysis Decision Analysis is an easily-extensible expert system to help users make decisions of all types. Written entirely in Python, Decision Analysis, at t.


Style Analysis MATLAB code for rolling style analysis in portfolio performance analysis. Requirements:· MATLAB Release: R13· Optimization Toolbox.


arXiv. org > q-fin > arXiv:1103.2577


Quantitative Finance > Statistical Finance


Title: Multifractal detrending moving average cross-correlation analysis


(Submitted on 14 Mar 2011 (v1 ), last revised 18 Mar 2011 (this version, v2))


Abstract: There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross-correlations. The multifractal detrended cross-correlation analysis (MF-DCCA) approaches can be used to quantify such cross-correlations, such as the MF-DCCA based on detrended fluctuation analysis (MF-X-DFA) method. We develop in this work a class of MF-DCCA algorithms based on the detrending moving average analysis, called MF-X-DMA. The performances of the MF-X-DMA algorithms are compared with the MF-X-DFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving average processes and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents $h_{xy}$ extracted from the MF-X-DMA and MF-X-DFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross-correlation is independent of the cross-correlation coefficient between two time series and the MF-X-DFA and centered MF-X-DMA algorithms have comparative performance, which outperform the forward and backward MF-X-DMA algorithms. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MF-X-DMA algorithm gives the best estimates of $h_{xy}(q)$ since its $h_{xy}(2)$ is closest to 0.5 as expected, and the MF-X-DFA algorithm has the second best performance. For the volatilities, the forward and backward MF-X-DMA algorithms give similar results, while the centered MF-X-DMA and the MF-X-DFA algorithms fail to extract rational multifractal nature.


15 pages, 4 figures, 2 matlab codes for MF-X-DMA and MF-X-DFA


Many popular quantitative trading strategies are public for quite a while. Now, if you like to utilize such a strategy with real money, you must make sure that your strategy performs well. For simple strategies, MS Excel is perfect for this task. But, since we would like to use an optimization and a specific visualization later, we use Theta Suite and Matlab. This also allows the analysis of more complex strategies if you like.


Setting up a quantitative trading strategy: MACD – signal


One of the most popular technical indicators is the Moving Average Convergence/Divergence (MACD), which is essentially the difference between two moving averages. The literature says that the zero crossing of an MACD line would give a good indication for buying or selling a stock. Sometimes a trigger (signal) line is added as well, with the claim that this works even better. Let's see if this is true.


More precisely trading MACD is usually defined as


resp. in a loop over time this looks like


where EMA_12 and EMA_26 are two different exponential moving averages with constants "const_l = 12" and "const_l = 26". The EMA is defined as:


The appropriate trading system with a signal period of "const_l = 9" looks like
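The ThetaML listings are not reproduced in this copy; an equivalent MATLAB sketch of the MACD and signal lines (the function and variable names are mine, not the original model's) is:

function [macd, sig, long] = macd_signal(price)
    macd = ema(price, 12) - ema(price, 26);   % MACD = EMA_12 - EMA_26
    sig  = ema(macd, 9);                      % signal line, const_l = 9
    long = macd > sig;                        % hold the stock while MACD is above its signal
end

function y = ema(p, N)
    a = 2/(N+1);                              % exponential smoothing constant
    y = zeros(size(p)); y(1) = p(1);
    for t = 2:numel(p)
        y(t) = y(t-1) + a*(p(t) - y(t-1));    % EMA_t = EMA_(t-1) + a*(P_t - EMA_(t-1))
    end
end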


Testing the strategy with real historical data


This part is very important. I cannot stress this fact too much: in a later post, we will talk about back-testing much more.


Get Data


Assigning this data to a ThetaML process via


allows the estimation of the performance of the MACD based trading strategy. Here is a graph of IBM stock prices from 2000-01-01 to 2011-12-31:


Matlab plot of IBM stock price


Backtesting the MACD trading strategy


We can run the above ThetaML models using the Theta Suite Orchestrator and connect it with the historical IBM data in Matlab in the Configurator. Then, in the Result Explorer, we get the performance of the corresponding MACD-signal trading strategy without short selling


Plot of performance of MACD trading strategy


and with short selling, it looks like


Performance of MACD trading strategy with short selling


Note that during most years, the MACD-signal strategy does not perform better than the underlying itself. Taking transaction costs into account, this looks even worse. Interestingly, the year 2000 delivered a great performance of the MACD strategy, but all later years did not perform that well.


Conclusion


It is easy to verify if a strategy would have performed well using historical data. ThetaML and Matlab are excellent tools for this task. The MACD-based trading strategy we analyzed is not significantly better than holding the underlying itself. Other parameters of the trading strategy might lead to better results, so we can perform an optimization. We will see that next week.


Averages/Mean angle


Averages/Mean angle You are encouraged to solve this task according to the task description, using any language you may know.


When calculating the average or mean of an angle one has to take into account how angles wrap around so that any angle in degrees plus any integer multiple of 360 degrees is a measure of the same angle.


If one wanted an average direction of the wind over two readings where the first reading was of 350 degrees and the second was of 10 degrees, then the average of the numbers is 180 degrees, whereas noting that 350 degrees is equivalent to -10 degrees gives two readings at 10 degrees either side of zero degrees, leading to a more fitting mean angle of zero degrees.


To calculate the mean angle of several angles:


Assume all angles are on the unit circle and convert them to complex numbers expressed in real and imaginary form.


Compute the mean of the complex numbers.


Convert the complex mean to polar coordinates whereupon the phase of the complex mean is the required angular mean.


(Note that, since the mean is the sum divided by the number of numbers, and division by a positive real number does not affect the angle, you can also simply compute the sum for step 2.)


You can alternatively use this formula:


Given the angles $\alpha_1, \dots, \alpha_n$ the mean is computed by
$\bar{\alpha} = \operatorname{atan2}\left(\frac{1}{n}\cdot\sum_{j=1}^{n}\sin\alpha_j,\ \frac{1}{n}\cdot\sum_{j=1}^{n}\cos\alpha_j\right)$


Write a function/method/subroutine/... that, given a list of angles in degrees, returns their mean angle. (You should use a built-in function if you have one that does this for degrees or radians.)


Use the function to compute the means of these lists of angles (in degrees): [350, 10], [90, 180, 270, 360], [10, 20, 30]; and show your output here.


See Also


Averages/Mean time of day


Contents


An implementation based on the formula using the "Arctan" (atan2) function, thus avoiding complex numbers:
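The code block is missing from this copy; a MATLAB one-liner implementing the atan2 formulation (degrees in, degrees out) could be:

meanAngle = @(deg) atan2d(mean(sind(deg)), mean(cosd(deg)));

meanAngle([350 10])          % ~0
meanAngle([90 180 270 360])  % mean vector is ~0, so the result is numerically meaningless
meanAngle([10 20 30])        % 20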


arXiv. org > q-fin > arXiv:1005.0877


Quantitative Finance > Statistical Finance


Title: Detrending moving average algorithm for multifractals


(Submitted on 6 May 2010 (v1 ), last revised 8 Jun 2010 (this version, v2))


Abstract: The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of non-stationary time series and the long-range correlations of fractal surfaces, which contains a parameter $\theta$ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward ($\theta=0$), centered ($\theta=0.5$), and forward ($\theta=1$) detrending windows. We find that the estimated multifractal scaling exponent $\tau(q)$ and the singularity spectrum $f(\alpha)$ are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, which provides the most accurate estimates of the scaling exponents with lowest error bars, while the centered MFDMA method has the worse performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis (MFDFA). The one-dimensional backward MFDMA method is applied to analyzing the time series of Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed.


13 pages, 3 figures, 2 tables. We provide the MATLAB codes for the one-dimensional and two-dimensional MFDMA Algorithms


Statistical Finance (q-fin. ST) ; Computational Physics (physics. comp-ph); Data Analysis, Statistics and Probability (physics. data-an); Portfolio Management (q-fin. PM)


Excel Data Analysis: Forecasting


Professor Wayne Winston has taught advanced forecasting techniques to Fortune 500 companies for more than twenty years. In this course, he shows how to use Excel's data-analysis tools—including charts, formulas, and functions—to create accurate and insightful forecasts. Learn how to display time-series data visually; make sure your forecasts are accurate, by computing for errors and bias; use trendlines to identify trends and outlier data; model growth; account for seasonality; and identify unknown variables, with multiple regression analysis. A series of practice challenges along the way helps you test your skills and compare your work to Wayne's solutions.


This course qualifies for 3 Category A professional development units (PDUs) through lynda. com, PMI Registered Education Provider #4101.


Topics include:


Plotting and displaying time-series data


Creating a moving average chart


Accounting for errors and bias


Using and interpreting trendlines


Modeling exponential growth


Calculating compound annual growth rate (CAGR)


Analyzing the impact of seasonality


Introducing the ratio-to-moving-average method


Forecasting with multiple regression


Welcome


Hi, I'm Wayne Winston. In this course I'll be showing you how to create and evaluate forecasts. We'll start by exploring the nature of time series data with scatter plots and moving average plots. Then we'll examine data bias and accuracy using methods including mean absolute deviation and sum of squared errors. Next we'll try out trend lines for forecasting. Then we'll model exponential growth and compute CAGRs, or compound annual growth rates. And finally, I'll help you understand seasonality of data and how to forecast with multiple regression.


So roll up your sleeves and let's get ready to learn a lot about forecasting.


There are currently no FAQs about Excel Data Analysis: Forecasting.




bigjoepops 01 Jul 2014


I am trying to create a code section that will take a 1D array and create a moving average array. Sorry if this is a bad description. I want to take x elements of the input array, average them, and put that average in the first element of a new array. Then take the next x elements, average them, and put them as the second element of the new array. I want this done until the array is empty.


I have two possible ways to do it, but neither are running as fast as I wanted them to. I want to see if anyone knows of a faster way to conduct this averaging.


Attached Thumbnails




GregSands 01 Jul 2014


That's not quite a moving average, rather a down-sampling - i. e. Filtered Array is shorter than Input Data. Your first solution is pretty good - just speed it up with a Parallel For Loop. An alternative is to reshape into a 2D array. Both these come out roughly the same speed, about 5x faster on my machine than your solutions above.


If you want a true Moving Average (where the result is the same length as the original) I think this suggestion from the NI forums using an FIR filter is nice and simple, although you might look carefully at the first Num values if that's important.
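The thread itself is about LabVIEW, but the reshape suggestion translates directly; in MATLAB terms the down-sampling average might look like this (x and N below are assumptions):

N = 10;                                       % samples per block
x = rand(1003, 1);                            % example input
m = floor(numel(x)/N)*N;                      % drop the partial block at the end
filtered = mean(reshape(x(1:m), N, []), 1).'; % one average per N-sample block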




ThomasGutzler 02 Jul 2014


You have to be careful with the FIR filter the way you're using it because it applies a shift to your data by the amount of "Num of Averages".


To fix that, you have to do some padding at both ends of your input data. In this example I just repeat the first and last value:


If you run it through a graph it becomes obvious:




Tim_S 02 Jul 2014


Hadn't thought of using a FIR filter. I benchmarked a mean method and your FIR method. The FIR method was 8-9x faster on my system than the mean with a standard for loop, and 2-3x faster than the mean method with the for loop set for parallelism.




Gary Rubin 02 Jul 2014


I don't have LabVIEW installed on this machine and I may be confusing LabVIEW and MATLAB primitives, but I've seen a pretty fast approach that relies on a cumulative summation. Textually, the algorithm is:


Adjust indices as necessary to center your window in the right place and be careful with the edge conditions.
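In MATLAB terms, that cumulative-summation trick might be sketched as follows (window length N; the edge handling is left open, as noted above):

N = 9;
x = randn(1000, 1);                           % example input
c = cumsum(x);
ma = (c(N:end) - [0; c(1:end-N)]) / N;        % ma(k) = mean(x(k:k+N-1))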




bigjoepops 16 Jul 2014


Thanks for your help. I was able to parallelize the for loop with the mean in it and it helped speed it up. I didn't try the FIR approach. I'm still waiting on verification that the averaging will work for the customer.


I recently had to perform a block average on an image to reduce its size for processing in MATLAB and learned about a useful function called blkproc or blockproc (in the newer versions) in the Image Processing Toolbox. Block averaging is a process by which you average non-overlapping blocks of an image, each of which becomes a single pixel in the block averaged image. The standard image resize functions use a filter to resize the image and do not allow the blocks to be non-overlapping.


MATLAB's blkproc or blockproc functions facilitate this by making it easy for you to specify a function to be applied to blocks in the image. If you have an image img and want to do a M x N block average (i. e. the pixels of the resultant image are the average of M x N blocks in the original), you can use:
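For example (img, M and N stand for your image and block size; cameraman.tif is just a sample image shipped with the toolbox):

img = imread('cameraman.tif');
M = 4; N = 4;
imgSmall = blkproc(img, [M N], @mean2);       % each M x N block becomes one averaged pixel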


In newer versions of MATLAB, blockproc is preferred and is used this way:
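Using the same img and block size as above, the blockproc form wraps mean2 so it accepts the block struct:

fun = @(block_struct) mean2(block_struct.data);   % unwrap the block struct for mean2
imgSmall = blockproc(img, [M N], fun);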


mean2 is an Image Processing Toolbox function that takes a 2D array and returns the mean of all values. The distinction between blkproc usage and blockproc usage is that with blockproc, the function that it expects in place of mean2 takes a block struct instead of an array. Hence, we need to define a new inline function fun which provides the right interface by wrapping the mean2 function.


What blkproc / blockproc does is divides the image up into M x N blocks and feeds each one to mean2 and then takes the result and puts it into a single pixel in the new image.


You can replace mean2 with other functions that take an M x N array and returns a M' x N' array (where M' and N' are arbitrary numbers; for mean2 this is always 1 x 1) and the result will be constructed by tiling the M' x N' arrays in the order that the M x N blocks occur in the original image. The figure below shows what happens:


The original image is 20 x 20 pixels. Setting M = 4 and N = 4 and using mean2 (which outputs M' = 1 and N' = 1), the final image is 5 x 5 pixels.


When the image cannot be divided up into an integer number of M x N blocks, you will get border effects as the blkproc function pads images out by zeros. The newer version blockproc allows you to specify how you want to treat the partial blocks, either process them as is or pad them with zeros, with the 'PadPartialBlocks' parameter. See the MATLAB documentation for details.


The MATLAB documentation does not indicate the actual order of processing of the blocks (i. e. the actual sequence by which the blocks are processed). If the blkproc or blockproc implementation is parallel or multithreaded, there will not be any guaranteed order at all.


I believe that the blkproc function is single threaded on a single core computer, so the order of processing should be simple. To find out, I used the following test function, which does nothing except print the value of each image block passed into it. When I use this function with blkproc, I will use a block size of 1 x 1 so it will only print one value each time the function is executed.


I then constructed a matrix of values that I can easily interpret:


Finally, I ran blkproc with my test function testblkproc using a block size of 1 x 1. This makes it output the order in which the 1 x 1 blocks are processed:
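A minimal reconstruction of that experiment (the function body and the test matrix are my assumptions, not the original code) is:

% testblkproc.m
function y = testblkproc(x)
    disp(x)                                   % print the single value in this 1 x 1 block
    y = x;                                    % return the block unchanged
end

% then, at the command line:
A = reshape(1:16, 4, 4)'                      % 1 2 3 4 on the first row, 5 6 7 8 on the next, ...
B = blkproc(A, [1 1], @testblkproc);          % the printed order reveals the processing order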


According to this test, blkproc processes entire rows first before moving onto the next row. I should re-iterate that this is only applicable to the blkproc function running on a single core computer.


fatin. 2010/09/25 14:05


I would like to partition an image to 16 parts, then working with each part as image. I mean, I will get 16 images instead of one image.


how can I partition the image to 16 region? how can I save each region as image. I am using Matlab


Peter Yu. 2010/09/27 01:36


I haven't tried this but maybe write your own custom block function (function fun) so that it will save the blocks out as images files. It should work since the block functions are just normal functions.


Ashraf Suyyagh. 2011/05/01 04:56


What is the order of block processing? Vertically, horizontally or Random?


Peter Yu. 2011/05/01 14:33


The order is not given in the MATLAB docs. You will have to code a test function to see what happens. I would expect it depends on whether the blockproc implementation is multithreaded in your version of MATLAB, how many cores you have, etc. I updated the page with one such function that you can use and tested it with a single threaded blkproc.


soumya. 2011/11/12 01:56


sir, I have a 256x256 image and I want to divide it into blocks each having 8x8 pixels, so I will get 1024 blocks in total. Now I want to process each block manually; is this possible with blkproc? Can you help me? I visited many sites but none of them are responding


kiran. 2012/02/28 03:01


I = imread('imagefile.extension');
[r, c] = size(I);
bs = 8;                          % Block size (8x8)

nob = (r/bs)*(c/bs);             % Total number of 8x8 blocks

% Dividing the image into 8x8 blocks
kk = 0;
for i = 1:(r/bs)
    for j = 1:(c/bs)
        Block(:,:,kk+j) = I((bs*(i-1)+1):(bs*(i-1)+bs), (bs*(j-1)+1):(bs*(j-1)+bs));
    end
    kk = kk + (c/bs);            % advance by the number of blocks per row
end

% Accessing individual blocks

figure; imshow(Block(:,:,1))     % shows the first 8x8 block in a figure window
figure; imshow(Block(:,:,2))     % shows the second 8x8 block (rows 1:8, cols 9:16), and so on


Samta. 2013/01/07 07:44


About Peter Yu I am a research and development professional with expertise in the areas of image processing, remote sensing and computer vision. I received BASc and MASc degrees in Systems Design Engineering at the University of Waterloo. My working experience covers industries ranging from district energy to medical imaging to cinematic visual effects. I like to dabble in 3D artwork, I enjoy cycling recreationally and I am interested in sustainable technology. More about me.


Feel free to contact me with any questions about this site at [user]@[host] where [user]= web and [host]= peteryu. ca


Copyright © 1997 - 2016 Peter Yu


Instructions for Downloading/Extracting Matlab Files


Please select from the following list your preferred form for downloading. Then save the downloaded file to the directory of your choice, and follow the instructions for "unpacking" it.


Self-extracting zip file for Windows (25K)


Save with .exe extension.


Run this file (double click on its name).


Save the extracted files in the directory of your choice.


Download this file now .


Zipped file for Windows -- requires a zip/unzip program (2K)


Save with .zip extension.


Run your Windows zip program to unzip the archive.


Save the extracted files in the directory of your choice.


Download this file now .


Tarred file for Unix/Linux (10K)


Save the file leastsq.tar.


In your command window (not the Matlab window), cd to the directory where you saved the file, and enter the command tar xvfp leastsq.tar


Download this file now .


Zipped tar file for Unix/Linux (1K)


Save the file leastsq.tar.gz.


In your command window (not the Matlab window), cd to the directory where you saved the file, and enter the command gunzip leastsq.tar.gz


Then enter the command tar xvfp leastsq.tar


Download this file now .


Self-extracting archive for Macintosh (34K)


Save the file leastsq.sea.hqx.


Use the utility StuffIt Expander to extract the files. Or use the utility BinHex to create leastsq.sea, and then run this application to extract the files.


Download this file now .


After you have saved your .m files, do the following:


If you saved your files in a directory that is not already in Matlab's path, use the addpath command to add your directory to the Matlab path.


Open a diary file in Matlab in order to save your work.


Open the first file for this module by typing on the Matlab command line:


Start Part 1 of the module by clicking the Forward button (or, if you prefer, return to Contents by clicking the Back button).


As others have mentioned, you should consider an IIR (infinite impulse response) filter rather than the FIR (finite impulse response) filter you are using now. There is more to it, but at first glance FIR filters are implemented as explicit convolutions and IIR filters with equations.


The particular IIR filter I use a lot in microcontrollers is a single pole low pass filter. This is the digital equivalent of a simple R-C analog filter. For most applications, these will have better characteristics than the box filter that you are using. Most uses of a box filter that I have encountered are a result of someone not paying attention in digital signal processing class, not as a result of needing their particular characteristics. If you just want to attenuate high frequencies that you know are noise, a single pole low pass filter is better. The best way to implement one digitally in a microcontroller is usually:


FILT <-- FILT + FF(NEW - FILT)


FILT is a piece of persistent state. This is the only persistent variable you need to compute this filter. NEW is the new value that the filter is being updated with this iteration. FF is the filter fraction, which adjusts the "heaviness" of the filter. Look at this algorithm and see that for FF = 0 the filter is infinitely heavy since the output never changes. For FF = 1, it's really no filter at all since the output just follows the input. Useful values are in between. On small systems you pick FF to be 1/2^N so that the multiply by FF can be accomplished as a right shift by N bits. For example, FF might be 1/16 and the multiply by FF therefore a right shift of 4 bits. Otherwise this filter needs only one subtract and one add, although the numbers usually need to be wider than the input value (more on numerical precision in a separate section below).
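The same update written out in MATLAB terms (a floating-point sketch of the arithmetic only; on a microcontroller the multiply by FF would be the right shift described above, and the sample data here is an assumption):

N  = 4;  FF = 1/2^N;                          % filter fraction, here 1/16
samples = 10 + randn(200, 1);                 % example A/D readings (assumed)
filt = samples(1);                            % the persistent state FILT
out  = zeros(size(samples));
for k = 1:numel(samples)
    filt = filt + FF*(samples(k) - filt);     % FILT <-- FILT + FF*(NEW - FILT)
    out(k) = filt;
end
% cascading a second identical pole on 'out' gives roughly 12 dB/octave rolloff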


I usually take A/D readings significantly faster than they are needed and apply two of these filters cascaded. This is the digital equivalent of two R-C filters in series, and attenuates by 12 dB/octave above the rolloff frequency. However, for A/D readings it's usually more relevant to look at the filter in the time domain by considering its step response. This tells you how fast your system will see a change when the thing you are measuring changes.


To facilitate designing these filters (which only means picking FF and deciding how many of them to cascade), I use my program FILTBITS. You specify the number of shift bits for each FF in the cascaded series of filters, and it computes the step response and other values. Actually I usually run this via my wrapper script PLOTFILT. This runs FILTBITS, which makes a CSV file, then plots the CSV file. For example, here is the result of "PLOTFILT 4 4":


The two parameters to PLOTFILT mean there will be two filters cascaded of the type described above. The values of 4 indicate the number of shift bits to realize the multiply by FF. The two FF values are therefore 1/16 in this case.


The red trace is the unit step response, and is the main thing to look at. For example, this tells you that if the input changes instantaneously, the output of the combined filter will settle to 90% of the new value in 60 iterations. If you care about 95% settling time then you have to wait about 73 iterations, and for 50% settling time only 26 iterations.


The green trace shows you the output from a single full amplitude spike. This gives you some idea of the random noise suppression. It looks like no single sample will cause more than a 2.5% change in the output.


The blue trace is to give a subjective feeling of what this filter does with white noise. This is not a rigorous test since there is no guarantee what exactly the content was of the random numbers picked as the white noise input for this run of PLOTFILT. It's only to give you a rough feeling of how much it will be squashed and how smooth it is.


PLOTFILT, maybe FILTBITS, and lots of other useful stuff, especially for PIC firmware development, is available in the PIC Development Tools software release at my Software downloads page.


Added about numerical precision


I see from the comments and now a new answer that there is interest in discussing the number of bits needed to implement this filter. Note that the multiply by FF will create log2(1/FF) new bits below the binary point. On small systems, FF is usually chosen to be 1/2^N so that this multiply is actually realized by a right shift of N bits.


FILT is therefore usually a fixed point integer. Note that this doesn't change any of the math from the processor's point of view. For example, if you are filtering 10 bit A/D readings and N = 4 (FF = 1/16), then you need 4 fraction bits below the 10 bit integer A/D readings. On most processors, you'd be doing 16 bit integer operations due to the 10 bit A/D readings. In this case, you can still do exactly the same 16 bit integer operations, but start with the A/D readings left shifted by 4 bits. The processor doesn't know the difference and doesn't need to. Doing the math on whole 16 bit integers works whether you consider them to be 12.4 fixed point or true 16 bit integers (16.0 fixed point).


In general, you need to add N bits for each filter pole if you don't want to add noise due to the numerical representation. In the example above, the second filter of two would have to have 10+4+4 = 18 bits to not lose information. In practice on an 8 bit machine that means you'd use 24 bit values. Technically only the second pole of two would need the wider value, but for firmware simplicity I usually use the same representation, and thereby the same code, for all poles of a filter.


Usually I write a subroutine or macro to perform one filter pole operation, then apply that to each pole. Whether a subroutine or macro depends on whether cycles or program memory are more important in that particular project. Either way, I use some scratch state to pass NEW into the subroutine/macro, which updates FILT, but also loads that into the same scratch state NEW was in. This makes it easy to apply multiple poles since the updated FILT of one pole is the NEW of the next one. When a subroutine, it's useful to have a pointer point to FILT on the way in, which is updated to just after FILT on the way out. That way the subroutine automatically operates on consecutive filters in memory if called multiple times. With a macro you don't need a pointer since you pass in the address to operate on each iteration.
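As a language-neutral illustration of that subroutine idea (the author's own examples below are PIC assembler macros), here is a hedged C sketch: one routine updates a single pole through a pointer and advances the pointer, so calling it twice realizes the two-pole cascade from the PLOTFILT 4 4 example. The 4-bit shift and the shared fixed-point format are my assumptions.

    #include <stdint.h>

    #define FILTER_SHIFT 4                         /* FF = 1/16 for every pole (assumption) */

    /* Update one pole in place and advance the pointer to the next pole.
     * The return value is the updated pole, which becomes NEW for the next pole. */
    static int32_t filter_pole(int32_t **state, int32_t new_value)
    {
        int32_t *filt = (*state)++;
        *filt += (new_value - *filt) >> FILTER_SHIFT;
        return *filt;
    }

    /* Two cascaded poles.  For simplicity both poles share one fixed-point format
     * (input left-shifted by FILTER_SHIFT); a strictly lossless version would add
     * FILTER_SHIFT more fraction bits before the second pole, as discussed above. */
    int32_t filter_two_pole(int32_t poles[2], int32_t adc_reading)
    {
        int32_t *p = poles;
        int32_t v  = adc_reading << FILTER_SHIFT;  /* add fraction bits up front */
        v = filter_pole(&p, v);                    /* pole 1 */
        v = filter_pole(&p, v);                    /* pole 2, fed by pole 1 */
        return v >> FILTER_SHIFT;                  /* drop fraction bits for the caller */
    }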


Code Examples


Here is an example of a macro as described above for a PIC 18:


And here is a similar macro for a PIC 24 or dsPIC 30 or 33:


Both these examples are implemented as macros using my PIC assembler preprocessor, which is more capable than either of the built-in macro facilities.


@clabacchio: Another issue I should have mentioned is firmware implementation. You can write a single pole low pass filter subroutine once, then apply it multiple times. In fact I usually write such a subroutine to take a pointer in memory to the filter state, then have it advance the pointer so that it can be called in succession easily to realize multi-pole filters. – Olin Lathrop Apr 20 '12 at 15:03


1. thanks very much for your answers - all of them. I decided to use this IIR filter, but this filter is not used as a standard low-pass filter, since I need to average counter values and compare them to detect changes in a certain range. Since these values can be of very different dimensions depending on hardware, I wanted to take an average in order to be able to react to these hardware-specific changes automatically. – sensslen May 21 '12 at 12:06


If you can live with the restriction of a power-of-two number of items to average (i.e. 2, 4, 8, 16, 32, etc.) then the divide can easily and efficiently be done on a low-performance micro with no dedicated divide instruction, because it can be done as a bit shift. Each right shift divides by another power of two, e.g.:
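The specific example lines from the original answer are not reproduced here; a minimal C sketch of the idea, assuming a running sum over 16 samples, would be:

    #include <stdint.h>

    /* Averaging a power-of-two number of samples needs no divide instruction:
     * each right shift divides by another factor of two, so >> 4 is the same as / 16. */
    uint16_t average_of_16(uint32_t sum_of_16_samples)
    {
        return (uint16_t)(sum_of_16_samples >> 4);
    }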


The OP thought he had two problems, dividing in a PIC16 and memory for his ring buffer. This answer shows that the dividing is not difficult. Admittedly it does not address the memory problem but the SE system allows partial answers, and users can take something from each answer for themselves, or even edit and combine others' answers. Since some of the other answers require a divide operation, they are similarly incomplete since they do not show how to efficiently achieve this on a PIC16. – Martin Apr 20 '12 at 13:01


There is an answer for a true moving average filter (aka "boxcar filter") with less memory requirements, if you don't mind downsampling. It's called a cascaded integrator-comb filter (CIC). The idea is that you have an integrator which you take differences of over a time period, and the key memory-saving device is that by downsampling, you don't have to store every value of the integrator. It can be implemented using the following pseudocode:
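The answer's pseudocode is not quoted here; the following C sketch is one way the single-stage scheme just described could look (a running integrator plus differences of stored integrator samples at the decimated rate). Names and sizes are illustrative; see the note after the code about choosing powers of two.

    #include <stdint.h>

    #define DECIMATION_FACTOR 8          /* illustrative; a power of two helps */
    #define STATE_SIZE        4          /* number of stored integrator samples */

    static int32_t  integrator;                  /* running sum of every input sample */
    static int32_t  history[STATE_SIZE];         /* integrator, sampled every DECIMATION_FACTOR inputs */
    static uint32_t sample_count;
    static uint32_t history_index;

    /* Feed one input sample.  Returns 1 and writes *out once per DECIMATION_FACTOR
     * samples; the output is the average of the last DECIMATION_FACTOR*STATE_SIZE inputs. */
    int cic_update(int32_t in, int32_t *out)
    {
        integrator += in;
        if (++sample_count % DECIMATION_FACTOR != 0)
            return 0;

        /* difference of the integrator over STATE_SIZE stored (decimated) samples */
        int32_t oldest = history[history_index];
        history[history_index] = integrator;
        history_index = (history_index + 1) % STATE_SIZE;

        *out = (integrator - oldest) / (DECIMATION_FACTOR * STATE_SIZE);
        return 1;
    }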


Your effective moving average length is decimationFactor*statesize, but you only need to keep around statesize samples. Obviously you can get better performance if your statesize and decimationFactor are powers of 2, so that the division and remainder operators get replaced by shifts and bitwise ANDs.


Postscript: I do agree with Olin that you should always consider simple IIR filters before a moving average filter. If you don't need the frequency-nulls of a boxcar filter, a 1-pole or 2-pole low-pass filter will probably work fine.


On the other hand, if you are filtering for the purposes of decimation (taking a high-sample-rate input and averaging it for use by a low-rate process) then a CIC filter may be just what you're looking for. (especially if you can use statesize=1 and avoid the ringbuffer altogether with just a single previous integrator value)


answered Apr 20 '12 at 12:54


There's some in-depth analysis of the math behind using the first order IIR filter that Olin Lathrop has already described over on the Digital Signal Processing stack exchange (includes lots of pretty pictures.) The equation for this IIR filter is:


This can be implemented using only integers and no division using the following code (might need some debugging as I was typing from memory.)
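The answer's own code block is not reproduced in this copy, so the C below is a hedged reconstruction of the usual integer-only form of y[n] = y[n-1] + alpha*(x[n] - y[n-1]) with alpha = 1/2^BITS; the accumulator keeps BITS extra fraction bits so the remainder of the shift is not thrown away each step. Variable names are mine, not the original poster's.

    #include <stdint.h>

    #define BITS 4                          /* alpha = 1/K with K = 2^BITS = 16 */

    static int32_t filter_acc;              /* y scaled by 2^BITS (keeps the fraction) */

    /* y[n] = y[n-1] + alpha*(x[n] - y[n-1]), alpha = 1/2^BITS, integer-only. */
    int32_t iir_update(int32_t x)
    {
        filter_acc += x - (filter_acc >> BITS);   /* acc = 2^BITS * y */
        return filter_acc >> BITS;                /* current filtered output y */
    }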


This filter approximates a moving average of the last K samples by setting the value of alpha to 1/K. Do this in the preceding code by #define-ing BITS to LOG2(K), i.e. for K = 16 set BITS to 4, for K = 4 set BITS to 2, etc.


(I'll verify the code listed here as soon as I get a chance and edit this answer if needed.)


answered Jun 23 '12 at 4:04


Here's a single-pole low-pass filter (moving average, with cutoff frequency = CutoffFrequency). Very simple, very fast, works great, and almost no memory overhead.
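The answer's code block is not included in this copy; the sketch below (in C) shows the kind of two-line filter being described. The mapping from CutoffFrequency to DecayFactor is an assumption on my part (one common choice for a single-pole filter), since the original formula is not shown.

    #include <math.h>

    #define PI 3.14159265358979323846

    /* All of these live outside the filter function, matching the note below;
     * the cutoff-to-DecayFactor mapping is an assumed, common choice. */
    static double DecayFactor;       /* pole location, between 0 and 1, usually close to 1 */
    static double AmplitudeFactor;   /* 1.0 - DecayFactor */
    static double MovingAverage;     /* persistent filter state */

    void filter_init(double CutoffFrequency, double SampleRate)
    {
        DecayFactor     = exp(-2.0 * PI * CutoffFrequency / SampleRate);
        AmplitudeFactor = 1.0 - DecayFactor;
        MovingAverage   = 0.0;       /* or the first input, to skip the ramp-up */
    }

    /* The update itself: one multiply-accumulate per incoming sample. */
    double filter_step(double newInput)
    {
        MovingAverage = MovingAverage * DecayFactor + newInput * AmplitudeFactor;
        return MovingAverage;
    }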


Note: All variables have scope beyond the filter function, except the passed in newInput


Note: This is a single stage filter. Multiple stages can be cascaded together to increase the sharpness of the filter. If you use more than one stage, you'll have to adjust DecayFactor (as relates to the Cutoff-Frequency) to compensate.


And obviously all you need is those two lines placed anywhere, they don't need their own function. This filter does have a ramp-up time before the moving average represents that of the input signal. If you need to bypass that ramp-up time, you can just initialize MovingAverage to the first value of newInput instead of 0, and hope the first newInput isn't an outlier.


(CutoffFrequency/SampleRate) has a range of between 0 and 0.5. DecayFactor is a value between 0 and 1, usually close to 1.


Single-precision floats are good enough for most things, I just prefer doubles. If you need to stick with integers, you can convert DecayFactor and AmplitudeFactor into fractional integers, in which the numerator is stored as the integer, and the denominator is an integer power of 2 (so you can bit-shift to the right as the denominator rather than having to divide during the filter loop). For example, if DecayFactor = 0.99, and you want to use integers, you can set DecayFactor = 0.99 * 65536 ≈ 64881. And then anytime you multiply by DecayFactor in your filter loop, just shift the result >> 16.


For more information on this, an excellent book that's online, chapter 19 on recursive filters: http://www.dspguide.com/ch19.htm


P.S. For the moving average paradigm, a different approach to setting DecayFactor and AmplitudeFactor may be more relevant to your needs. Say you want the previous six or so items averaged together; doing it discretely, you'd add 6 items and divide by 6, so you can set AmplitudeFactor to 1/6 and DecayFactor to (1.0 - AmplitudeFactor).


answered May 14 '12 at 22:55


Everyone else has commented thoroughly on the utility of IIR vs. FIR, and on power-of-two division. I'd just like to give some implementation details. The below works well on small microcontrollers with no FPU. There's no multiplication, and if you keep N a power of two, all the division is single-cycle bit-shifting.


Basic FIR ring buffer: keep a running buffer of the last N values, and a running SUM of all the values in the buffer. Each time a new sample comes in, subtract the oldest value in the buffer from SUM, replace it with the new sample, add the new sample to SUM, and output SUM/N.


Modified IIR "ring buffer": keep a running SUM of roughly the last N values. Each time a new sample comes in, SUM -= SUM/N, add in the new sample, and output SUM/N.
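A hedged C sketch of both versions just described, with N = 16 so that every divide by N is a 4-bit shift; names and widths are illustrative.

    #include <stdint.h>

    #define N_SHIFT 4
    #define N       (1 << N_SHIFT)       /* 16 samples */

    /* --- Basic FIR ring buffer: true moving average of the last N samples --- */
    static int32_t  fir_buf[N];
    static uint32_t fir_idx;
    static int32_t  fir_sum;

    int32_t fir_boxcar(int32_t x)
    {
        fir_sum -= fir_buf[fir_idx];     /* drop the oldest sample from the sum */
        fir_buf[fir_idx] = x;            /* overwrite it with the new sample */
        fir_sum += x;
        fir_idx = (fir_idx + 1) & (N - 1);
        return fir_sum >> N_SHIFT;       /* SUM / N */
    }

    /* --- Modified IIR: running SUM of "about" the last N samples, no buffer --- */
    static int32_t iir_sum;

    int32_t iir_average(int32_t x)       /* assumes arithmetic right shift for negatives */
    {
        iir_sum -= iir_sum >> N_SHIFT;   /* SUM -= SUM/N */
        iir_sum += x;                    /* add the new sample */
        return iir_sum >> N_SHIFT;       /* output SUM/N */
    }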


answered Aug 28 '13 at 13:45


If I'm reading you right, you're describing a first-order IIR filter; the value you're subtracting isn't the oldest value which is falling out, but is instead the average of the previous values. First-order IIR filters can certainly be useful, but I'm not sure what you mean when you suggest that the output is the same for all periodic signals. At a 10KHz sample rate, feeding a 100Hz square wave into a 20-stage box filter will yield a signal that rises uniformly for 20 samples, sits high for 30, drops uniformly for 20 samples, and sits low for 30. A first-order IIR filter. – supercat Aug 28 '13 at 15:31


will yield a wave which sharply starts rising and gradually levels off near (but not at) the input maximum, then sharply starts falling and gradually levels off near (but not at) the input minimum. Very different behavior. – supercat Aug 28 '13 at 15:32


One issue is that a simple moving average may or may not be useful. With an IIR filter, you can get a nice filter with relatively few calcs. The FIR you describe can only give you a rectangle in time -- a sinc in freq -- and you can't manage the side lobes. It may be well worth it to throw in a few integer multiplies to make it a nice symmetric tunable FIR if you can spare the clock ticks. – Scott Seidman Aug 29 '13 at 13:50


@ScottSeidman: No need for multiplies if one simply has each stage of the FIR output the average of the input to that stage and its previous stored value, and then store the input (if one has the numeric range, one could use the sum rather than the average). Whether that's better than a box filter depends on the application (the step response of a box filter with a total delay of 1ms, for example, will have a nasty d2/dt spike when the input changes, and again 1ms later, but will have the minimum possible d/dt for a filter with a total 1ms delay). – supercat Aug 29 '13 at 15:25


As mikeselectricstuff said, if you really need to reduce your memory needs, and you don't mind your impulse response being an exponential (instead of a rectangular pulse), I would go for an exponential moving average filter. I use them extensively. With that type of filter, you don't need any buffer. You don't have to store N past samples. Just one. So, your memory requirements get cut down by a factor of N.


Also, you don't need any division for that. Only multiplications. If you have access to floating-point arithmetic, use floating-point multiplications. Otherwise, do integer multiplications and shifts to the right. However, we are in 2012, and I would recommend that you use compilers (and MCUs) that allow you to work with floating-point numbers.


Besides being more memory efficient and faster (you don't have to update items in any circular buffer), I would say it is also more natural, because an exponential impulse response matches better the way nature behaves, in most cases.


answered Apr 20 '12 at 9:59


One issue with the IIR filter, almost touched on by @olin and @supercat but apparently disregarded by others, is that the rounding down introduces some imprecision (and potentially bias/truncation). Assuming that N is a power of two and only integer arithmetic is used, the shift right systematically eliminates the LSBs of the new sample. That means that no matter how long the series is, the average will never take those bits into account.


For example, suppose a slowly decreasing series (8,8,8,...,8,7,7,7,...,7,6,6,...) and assume the average is indeed 8 at the beginning. The first "7" sample will bring the average to 7, whatever the filter strength. Just for one sample. Same story for 6, etc. Now think of the opposite: the series goes up. The average will stay at 7 forever, until the sample is big enough to make it change.


Of course, you can correct for the "bias" by adding 1/2^N/2, but that won't really solve the precision problem: in that case the decreasing series will stay forever at 8 until the sample is 8 - 1/2^(N/2). For N = 4, for example, any sample above zero will keep the average unchanged.


I believe a solution for that would imply holding an accumulator of the lost LSBs. But I didn't get far enough to have code ready, and I'm not sure it would not harm the IIR power in some other kinds of series (for example, whether 7,9,7,9 would average to 8 then).
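For what it's worth, one way to code that "accumulator of the lost LSBs" idea is plain fraction saving: carry the remainder of the shift into the next update. This is only my sketch of the idea, not tested against the 7,9,7,9 question above.

    #include <stdint.h>

    #define N 4                              /* FF = 1/16 */

    static int32_t filt;                     /* filter output */
    static int32_t lost_lsbs;                /* remainder carried into the next update */

    int32_t filter_update_fraction_saving(int32_t new_sample)
    {
        int32_t delta = (new_sample - filt) + lost_lsbs;
        int32_t step  = delta >> N;          /* usual update term */
        filt      += step;
        lost_lsbs  = delta - (step << N);    /* the bits the shift threw away */
        return filt;
    }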


@Olin, your two-stage cascade also would need some explanation. Do you mean holding two average values, with the result of the first fed into the second in each iteration? What's the benefit of this?


answered Nov 23 '14 at 17:25


Motion Lab Systems Software


EMG Analysis Software


The EMG Analysis software is a research quality analysis program that implements a wide range of powerful analysis methods using Fast Fourier Transform (FFT) techniques as well as many traditional EMG analysis methods making it especially suitable for educational as well as research uses. This application, like the basic EMG Graphing program, fully supports all C3D formats as well as several older file formats and raw data from Dataq data acquisition systems for stand-alone functionality. Data can be processed and exported to third-party applications for additional analysis via C3D files or standard ASCII formatted files compatible with Excel, SAS and MATLAB etc.


This is one of the most powerful, yet easy to use, software packages using FFT analysis that is available to both the clinician and researcher. The program reads EMG data directly from C3D files as well as the native file formats from B|T|S, Motion Analysis Corporation and Vicon Motion Systems. Written for the kinesiologist-driven environment, directly to clinical specifications, this software effortlessly delivers instant viewing and full color reports using sophisticated Frequency Spectrum, Power Spectrum and Muscle Correlation techniques. The EMG Analysis program includes all of the basic features of the EMG Graphing version of this program and adds many analysis and data export options:


EMG Graphing Features included


Calibrate your EMG data using gain switches or calibration signals to display EMG amplitude in uV at skin surface.


Includes adult and age-matched child normal activity databases for EMG activity comparisons.


Display EMG normalization by Gait Cycle, signal amplitude and Manual Muscle Test (MMT) signal amplitude.


Data processing includes High Pass and Low Pass filters, DC offset removal, noise reduction, and automatic event determination.


Reads raw EMG data from Dataq, Motion Analysis, Vicon and C3D files


Saves all processed EMG data in C3D files for easy retrieval and exchange.


Templates automatically identify muscle and calibration information when EMG files are opened.


Print multiple and individual EMG Gait Cycle reports displaying the signal as raw or rectified EMG data.


Automatic detection of gait cycles from event switch data, force plate and marker data, or C3D event data.


Export processed EMG gait cycles to ASCII files to create Raw EMG and Rectified EMG files (Excel compatible).


Direct interface to the MLS Report Generator for Presentation Standard Graphics suitable for posters and slides.


EMG Analysis Features


Moving Average Envelope EMG reports with adjustable window size.


Linear Envelope EMG reports with adjustable filter frequencies via dual pass digital FIR filters for zero delay processing.


RMS data Analysis EMG reports with adjustable window size and preprocessing high pass filter.


Intensity Filtered Averaging EMG reports with adjustable window size and signal suppression levels.


EMG threshold detection EMG reports with 10 adjustable thresholds using moving average and Linear envelopes.


Zero Crossing detection EMG reports with adjustable hysteresis levels.


Integrate over Time EMG reports with adjustable reset time interval.


Integrate and Reset EMG reports with adjustable reset levels.


EMG Power Spectrum reports with FFT windows determined by the EMG data.


EMG Amplitude Distribution reports.


Cocontraction correlation EMG reports with user determined low pass filter and optional high pass artifact filters.


Signal aliasing reports for data collection diagnosis and Quality Assurance.


Process and export rectified, window and linear envelope data to C3D files for third-party analysis.


Export processed data as ASCII files using user specified number of data points, optionally normalized or scaled.


Multimedia support - listen to any EMG channel and export EMG data as audio files.


Buy it once, use it everywhere - the EMG Analysis application, like all Motion Lab Systems applications, is a site licensed application. The purchase of an EMG Analysis license allows multiple copies of the software to be used within the licensed environment, permitting its use on multiple computers, laptops etc. making it very easy to use in academic and research environments without any hardware access keys or restrictive licensing requirements.


EMG Trial Reports


Raw EMG Data - opening an EMG data file with EMG Analysis will display the raw, unprocessed data for each recorded EMG channel. The channels can be labeled with the individual muscle names and side, and scaled at surface potential (microvolts). Heel contact and toe-off are optionally indicated by vertical lines on each plot, with user-selected color coding of the EMG data according to limb side. Once this information is entered, EMG Analysis will store it in the C3D file as part of the data record so that it will always be available whenever anyone opens the file.


Rectified EMG Data - the EMG data can also be plotted and printed as rectified EMG. The EMG Analysis program offers the user complete control over the size of the displayed image at all times. Data displayed on the screen can be previewed before sending it to the printer to allow the user to optimize the printed output - EMG trial reports can be compressed onto a single page or printed out in full over several pages.


Envelope EMG Data - the EMG Analysis program supports the display of enveloped EMG data for trial reports, displaying the complete enveloped gait cycle as enveloped data across all EMG channels. The individual gait cycle envelopes can be averaged within the trial and displayed in the EMG Analysis reports and of course, the entire trial can be printed as a single display showing the relationships between the individual muscle contractions.


All three trial display options offer the optional display of normal EMG activity bars underneath the subject's EMG records together with gait cycle events shown as heel strike (solid lines) and toe-off (dotted lines), indicating the stance and swing phases of gait. Each EMG channel can be labeled with side, a label and muscle name together with either the calibrated EMG levels (skin surface or intramuscular), or the actual recorded signal levels in volts.


EMG Analysis Reports and Methods


The EMG Analysis program can generate a gait cycle graph of raw EMG for each recorded EMG channel. Graphs can be scaled by %, or surface potential (microvolts) and display full heel contact to heel contact data with swing and stance phases of gait indicated. "Normal" EMG activity bars can be displayed and printed and each EMG graph labeled with the muscle name. The muscle names and activity bars are fully configurable by the user and can be edited or translated into any language. The EMG Analysis program allows the user to define individual normal activity and create a number of individually named activity profiles for different age groups and EMG protocols. EMG Analysis is supplied with sample normal activity profiles and can be downloaded and run as an evaluation version on any Windows PC.


In addition to the raw gait cycle EMG Graphs, EMG Analysis can also display the EMG data as a rectified signal. The graph below illustrates this and, in addition, is scaled in microvolts at skin surface as a result of a calibration operation. EMG Analysis supports EMG level calibration via built-in sources (available on most Motion Lab Systems EMG systems), or via an external source if your EMG system does not include calibration facilities.


"Normal" EMG activity bars may be displayed and printed and each EMG graph assigned a muscle name. The muscle names and activity bars are fully configurable by the user and can be edited or translated into any language. The EMG Analysis program allows the user to define individual normal activity and create a number of individually named activity profiles for different age groups and EMG protocols. EMG Analysis is supplied with normal adult and age matched children's activity profiles from documented sources - the supplied normal database is fully editable.


Each EMG graph can be displayed and printed in a user-selected color to indicate the limb (Left/Right) that is assigned to a specific EMG channel. These colors are used in both the screen displays and the printed output. The results of the analysis (both raw gait cycle and rectified gait cycle data) can be exported to Excel compatible ASCII text files, scaled by maximal effort, manual muscle test or skin surface levels.


EMG Analysis can generate gait cycle reports using two envelope methods. Using the first method the user can create reports using a standard user defined window filter with an adjustable window period specified in milliseconds - the graph illustrated to the right was generated using a moving window of 150ms.


Trials that contain multiple cycles of EMG data can average the defined cycles to generate standard deviation displays. Each individual EMG graph is labeled with the muscle name, scaling method, and displays the gait cycle phasic activity together with normal activity bars for adult or age matched normal children. Each printed report will display the window method, together with the selected gait cycle period and stance/swing percentage. This analysis option is not available in the basic EMG Graphing version.


The second envelope analysis method provided by EMG Analysis applies a user-controlled FIR (Finite Impulse Response) filter to the gait cycle. The cutoff frequency is user-selected - a single or dual pass filter is applied to the envelope to generate the individual plots allowing the graphs to be generated with typical data processing delays for comparison with older data, or with delay free results for current research.


The Averaged data can be plotted as a scaled value or as a percentage of maximum. The graph on the left uses a 6Hz, dual pass filter to smooth the EMG activity. The vertical dotted lines on the graph indicate the transition from stance to swing while the dual pass method eliminates the delay that is inherent in standard windowing methods.


Each of the enveloping and averaging methods has its own set of user controlled preset options that control the windowing period (specified in milliseconds) and the linear envelope filter frequency (specified in Hertz). The EMG analysis application allows the results of the analyses to be exported as ASCII files for further processing by third-party applications including Excel, SAS, MATLAB, Statistica etc - options are provided to normalize the exported data to a set number of points so that the exported data can be averaged without further processing. In addition, options are provided that allow the amplitude of the exported data to be scaled or calibrated. This analysis method is only available in the EMG Analysis application.


The RMS (Root Mean Square) Analysis differs from the Moving Average and Linear Envelope analysis methods in that it starts with raw EMG data instead of rectified EMG data. The analysis option was added to the EMG Analysis software at the request of a user who was able to specify the series of mathematical operations required, and the order in which they were to be performed. Motion Lab Systems is happy to add additional EMG Analysis functionality to the EMG Analysis program when users can provide precise descriptions of the required function.


The RMS analysis performs the following operations on the raw EMG data. A high pass filter with a user selectable cutoff frequency (usually in the range of 1 to 20Hz) is applied to the raw EMG data to remove any DC offset in the data stream. A root mean square value for the EMG data stream is then processed (the raw EMG data values are squared and then the square root calculated) and the resulting output is then averaged over a selectable period, usually in the range of 20 to 499 milliseconds. This analysis method is only available in the EMG Analysis application.


The Intensity Filtered Average (IFA) analysis option implements the algorithm described in the paper Computer Algorithms to Characterize Individual Subject EMG Profiles During Gait published in 1992 by Ross Bogey, Lee Barnes, and Jacquelin Perry, MD. This paper describes a processing method based on both the timing and the relative amplitude of the EMG signal that generates better results than the standard Moving Average or Linear Envelope methods. It is proposed that the IFA method generates an envelope output that more closely represents the onset, duration, and cessation of the subject's muscle activity.


The Intensity Filtered Average method differs from the various enveloping analysis methods because it excludes brief EMG activity bursts (less than 5% of the gait cycle or EMG period), and activity below a preset threshold, from the averaging process. Averaging methods used by the Window, Linear Envelope and RMS analysis methods all generate activity envelopes that include all of the available EMG data. The rationale for the IFA method is that by excluding relatively low level EMG activity from the envelope and averaging process, the results better reflect the essential muscle activity during gait, as the low intensity and short duration bursts have very little effect on the joint motion.


We developed an intuitive user interface for the Motion Lab Systems EMG Analysis program that places all of the commonly used options and commands on the mouse menu. Extensive field testing allows the program to make optimal use of customized menus that automatically display the correct defaults for the user. EMG Analysis is an exceptionally easy program to use.


The Threshold analysis option in EMG Analysis can produce muscle activity reports of EMG activity within any gait cycle. The activity can be quantized over a number of individually adjustable activity levels using EMG envelope data derived from either FIR filtered data or a moving window. Although most reports will divide the EMG activity into two or three levels, the Threshold analysis method supports up to ten individual activity levels. Unlike many programs that simply report the EMG activity as on or off, EMG Analysis allows the user to display and print reports that overlay the raw EMG data on the EMG activity display to confirm that the selected activity thresholds are correct. As with other EMG Analysis reports, normal EMG activity bars can be displayed from the included age-matched child and adult normal database. This analysis method is only available in the EMG Analysis application.


The Zero crossing analysis reports the number of times that the raw EMG signal crosses zero. This is reported both as a numeric value (shown in brackets by the muscle name and title, above the graph), and graphically.


This EMG Analysis function includes a control for the amount of hysteresis in the comparator used by the zero crossing analysis. Hysteresis is the difference between the EMG signal level input and the DC zero level, and setting this parameter controls the point at which the comparator turns off and turns on. The total number of zero crossings for the analyzed period is displayed after the muscle name. High values for the comparator hysteresis reduce the number of zero crossings that are detected, thus reducing the function's sensitivity to baseline noise in the EMG data. This analysis method is only available in the EMG Analysis application.


The Integrate over Time analysis is performed on the rectified raw EMG signal. The process of integration over time produces an output that is proportional to the level of the EMG signal over a given period that the user can select. The integration process sums the rectified EMG values for the selected time period – at the end of the time period the signal output is reset to zero and the integration process restarted.


The EMG Analysis program implements this function with a single option that defines the integration period in milliseconds. This can be set to a value between 1ms and 2000ms (2 seconds). Typical values are between 10 and 200ms depending on the activity period under investigation. This analysis method is only available in the EMG Analysis application.


Like its companion function, the Integrate and Reset Analysis is a mathematical function that is performed on the rectified raw EMG signal by the EMG Analysis program. This process of integration and reset produces an output that is proportional to the level of the EMG signal.


The integration process sums the rectified EMG values until the EMG reaches a set percentage of the average level – once this level is reached the signal output is reset to zero and the integration process restarted. This results in a series of saw-tooth waveforms whose repetition rate is directly proportional to the amplitude of the original EMG signal.


Both the Integrate over Time and the Integrate and Reset analysis functions were popular when only limited computation power was available to the researcher, as both of these functions can be easily implemented in hardware. They are included in the EMG Analysis program as reference methods for the researcher and for classroom demonstration.


In addition to the FIR filtering, included as part of the EMG data pre-processing features, EMG Analysis includes a number of advanced analysis features that use fast Fourier Transform (FFT) techniques. The FFT is an efficient algorithm, well suited to computation, that computes the discrete Fourier transform (DFT) and its inverse in a rapid and efficient manner. FFTs are used in a wide variety of applications and are particularly useful in performing frequency analysis functions within this application. The EMG Analysis program uses FFTs to perform EMG Power Spectrum Analysis, EMG Amplitude distribution Analysis, Correlation Analysis and Signal Aliasing analysis in addition to EMG signal filtering.


The EMG Power Spectrum analysis displays the distribution of the various frequency components in the EMG signal. The user has complete control of all the Power Spectrum variables and can select the start frequency and end frequency points for the spectra. Power Spectrum analysis can be performed over the entire EMG period or any EMG burst can be manually selected.


The default method uses the processed EMG envelope to automatically determine and analyze the longest period of activity for each individual muscle, thus restricting the Power Spectrum analysis to the major burst of activity and eliminating the baseline from the FFT calculations. This method produces a spectral report that displays the EMG activity without biasing the results with baseline noise. In addition, a further option allows the user to restrict the Power Spectrum analysis to ignore EMG signals below a fixed percentage of the EMG signal.


The Amplitude Distribution Analysis measures the amplitude density function and displays the amplitude distribution of the EMG signal representing the relative percentage of time that the EMG signal is at a given amplitude. In this analysis the range of EMG amplitudes is plotted across the horizontal axis while the total time at each level is plotted along the side.


This analysis is useful in looking at activity over longer periods than individual gait cycles. High levels at the left side of the graph (lower EMG levels) indicate that EMG activity is relatively light, whereas activity extending further to the right side of the graph suggests a greater degree of muscle activity.


Most amplitude distribution graphs will display either resting activity (a single peak to the left) or a double peak indicating both resting and work periods within the analyzed EMG recording. Amplitude Distribution analysis is a useful tool in biofeedback and relaxation analysis as well as an investigation into muscle activity and ergonomics. Some studies have indicated that workers whose amplitude distributions are strongly skewed to the right tend to report more muscle pain than others.


The Cocontraction Analysis allows you to view the degree of correlation between the simultaneous activation of antagonistic muscles. The EMG Analysis program includes a muscle cocontraction analysis function that allows the user to choose pairs of antagonistic muscles to compare. This function processes the rectified and smoothed EMG data using high pass or low pass filters that can be set by the user to provide a consistent cocontraction measurement environment.


The cocontraction results can be plotted as processed envelope graphs with each muscle identified by color and also exported as an ASCII text file for additional processing. A cocontraction index is calculated and displayed for each muscle pair.


If an analog data signal is not sampled at a high enough rate then signal aliasing can corrupt the sampled data by folding back all analog signals that have a frequency greater than half the sampling frequency (also known as the Nyquist point). This is the phenomenon that is known as aliasing and can cause problems when recording EMG signals. It is very important to avoid aliasing artifact in an EMG signal as it cannot be removed from the EMG data. The EMG Analysis aliasing function can be used on any recorded EMG data to help select a suitable EMG sampling rate. In this example the selected EMG channel shows that aliasing signals are present in the data, indicating that there are problems with the recording sampling rates, which were too low for the frequencies present in the EMG data.


The EMG Analysis application, like all Motion Lab Systems applications, is a site licensed application. The purchase of an EMG Analysis license allows multiple copies of the software to be used within the licensed environment, permitting its use on multiple computers, laptops etc. making it particularly useful in academic and research environments as it does not require hardware access keys or impose restrictive licensing requirements.




MATLAB and R code associated with our book Statistical Modeling and Computation (joint with Dirk Kroese) is available at the book website.


If you want to download the code associated with a particular paper, it will be easier to locate it at my research page. Below I organize the code by topics.


Please contact me if you find any errors.


Stochastic Volatility and GARCH Models


Time-Varying Parameter Vector Autoregressions


Marginal Likelihood and Deviance Information Criterion


Bayes factor computation for time-varying coefficients vs constant coefficients


Observed-data and conditional DICs computation for 7 SV models


Marginal likelihood computation for 7 SV and 7 GARCH models


Three variants of the DIC for three latent variable models: static factor model, TVP-VAR and semiparametric regression


Marginal likelihood computation for 6 models using the cross-entropy method: VAR, dynamic factor VAR, TVP-VAR, probit, logit and t-link


Inflation Modeling


Other Sources


A number of econometricians have provided code associated with their books or papers:


MATLAB code associated with Gary Koop's books, papers and short courses can be found on his website .


Dimitris Korobilis provides code for estimating a wide variety of models, including Bayesian VARs, TVP-VARs and factor models.


Jouchi Nakajima provides MATLAB and R code for estimating various stochastic volatility models, including a TVP-VAR with SV.


Copyright © Joshua Chan.


All rights reserved.


Forecasting by Smoothing Techniques


This site is a part of the JavaScript E-labs learning objects for decision making. Other JavaScript in this series are categorized under different areas of applications in the MENU section on this page.


A time series is a sequence of observations which are ordered in time. Inherent in the collection of data taken over time is some form of random variation. There exist methods for reducing or canceling the effect due to this random variation. Widely used techniques are "smoothing". These techniques, when properly applied, reveal more clearly the underlying trends.


Enter the time series Row-wise in sequence, starting from the left-upper corner, and the parameter(s), then click the Calculate button for obtaining one-period-ahead forecasting.


Blank boxes are not included in the calculations but zeros are.


In entering your data, to move from cell to cell in the data matrix use the Tab key, not the arrow or Enter keys.


Features of a time series, which might be revealed by examining its graph, together with the forecasted values and the behavior of the residuals, condition the forecasting modeling.


Moving Averages: Moving averages rank among the most popular techniques for the preprocessing of time series. They are used to filter random "white noise" from the data, to make the time series smoother or even to emphasize certain informational components contained in the time series.


Exponential Smoothing: This is a very popular scheme to produce a smoothed time series. Whereas in Moving Averages the past observations are weighted equally, Exponential Smoothing assigns exponentially decreasing weights as the observations get older. In other words, recent observations are given relatively more weight in forecasting than the older observations. Double Exponential Smoothing is better at handling trends. Triple Exponential Smoothing is better at handling parabolic trends.


An exponentially weighted moving average with a smoothing constant a corresponds roughly to a simple moving average of length (i.e. period) n, where a and n are related by:


a = 2/(n+1) OR n = (2 - a)/a.


Thus, for example, an exponentially weighted moving average with a smoothing constant equal to 0.1 would correspond roughly to a 19-day moving average. And a 40-day simple moving average would correspond roughly to an exponentially weighted moving average with a smoothing constant equal to 0.04878.
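As a quick sanity check of that relation, here is a trivial C helper pair (illustrative only):

    /* Rough equivalence quoted above between an exponentially weighted moving
     * average with smoothing constant a and a simple moving average of length n:
     * a = 2/(n + 1)  and  n = (2 - a)/a.  (a = 0.1 -> 19 days; 40 days -> 0.04878.) */
    double smoothing_constant_for_length(double n) { return 2.0 / (n + 1.0); }
    double equivalent_sma_length(double a)         { return (2.0 - a) / a; }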


Holt's Linear Exponential Smoothing: Suppose that the time series is non-seasonal but does display trend. Holt’s method estimates both the current level and the current trend.


Notice that the simple moving average is a special case of exponential smoothing, obtained by setting the period of the moving average to the integer part of (2-Alpha)/Alpha.


For most business data an Alpha parameter smaller than 0.40 is often effective. However, one may perform a grid search of the parameter space, with Alpha = 0.1 to Alpha = 0.9, in increments of 0.1. Then the best Alpha has the smallest Mean Absolute Error (MA Error).


How to compare several smoothing methods: Although there are numerical indicators for assessing the accuracy of the forecasting technique, the most widely used approach is visual comparison of several forecasts to assess their accuracy and choose among the various forecasting methods. In this approach, one must plot (using, e.g. Excel) on the same graph the original values of a time series variable and the predicted values from several different forecasting methods, thus facilitating a visual comparison.


You may like using the Past Forecasts by Smoothing Techniques JavaScript to obtain the past forecast values based on smoothing techniques that use only a single parameter. Holt and Winters methods use two and three parameters, respectively; therefore it is not an easy task to select the optimal, or even near-optimal, values by trial and error for the parameters.


The single exponential smoothing emphasizes the short-range perspective; it sets the level to the last observation and is based on the condition that there is no trend. The linear regression, which fits a least squares line to the historical data (or transformed historical data), represents the long range, which is conditioned on the basic trend. Holt's linear exponential smoothing captures information about recent trend. The parameters in Holt's model are the level parameter, which should be decreased when the amount of data variation is large, and the trend parameter, which should be increased if the recent trend direction is supported by some causal factors.


Short-term Forecasting: Notice that every JavaScript on this page provides a one-step-ahead forecast. To obtain a two-step-ahead forecast, simply add the forecasted value to the end of your time series data and then click on the same Calculate button. You may repeat this process a few times in order to obtain the needed short-term forecasts.


LOESS Smoothing in Excel


In 1979 William Cleveland published the LOESS (or LOWESS) technique for smoothing data, and in 1988 he and Susan J. Devlin published a refined version of the technique (references are given at the end of this article). For each X value where a Y value is to be calculated, the LOESS technique performs a regression on points in a moving range around the X value, where the values in the moving range are weighted according to their distance from this X value.


The NIST Engineering Statistics Handbook has a good description of the LOESS technique, including a worked example. A commenter named Nick used the NIST chapter as a starting point for his implementation of a LOESS function for Excel, and he posted it in a comment on JunkCharts. Nick’s approach was to create a UDF in VBA. The UDF accepts as inputs the X and Y data ranges, the number of points to use in the moving regression, and the X value for which to calculate Y. Nick’s UDF used Dictionary objects to hold intermediate values, and it outputs the Y value for the input X value.


I’ve expanded on Nick’s starting point, and produced the function presented later in this article. I’ve discarded the Dictionary objects in favor of VB arrays. It accepts the input X and Y data and the output X values either as ranges or as vertical arrays, and it outputs the calculated LOESS Y values as a vertical array. This means it can be called from within other procedures using arrays, or as a UDF from a worksheet, as an array formula. The original data must be sorted by X in either ascending or descending order (Nick’s Dictionaries do not require sorted input data, and I have an idea to remove the requirement from my function).


My algorithm, Nick’s original algorithm I based mine on, the NIST algorithm Nick based his upon, and all others I checked in the wild use a linear regression of the weighted values, and the weighting factor for input data point i is a sigmoidal curve based on


where X(i) is the normalized distance (along the X axis) between input data point i and the output X value at which the LOESS smoothed value is being computed. The normalization X is the distance/(maximum distance among points in the moving regression).
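The weighting expression itself appears to have been an image and is not reproduced here. The standard LOESS weight used in the NIST example is the tri-cube function, so the following C sketch is most likely what the missing expression computes; treat it as a reconstruction rather than a quote of the article.

    #include <math.h>

    /* Tri-cube weight: w = (1 - |d|^3)^3 for |d| < 1, and 0 otherwise, where d is
     * the distance from input point i to the output X value, normalized by the
     * largest distance among the points in the moving regression window. */
    double tricube_weight(double normalized_distance)
    {
        double d = fabs(normalized_distance);
        if (d >= 1.0)
            return 0.0;
        double t = 1.0 - d * d * d;
        return t * t * t;
    }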


To use the function as a UDF, select the multicell output Y range, and enter this formula:


where C2:C22 and D2:D22 are the input X and Y ranges, F2:F21 is the output X range, and 7 is the number of points in the moving regression (see screenshot below).


Enter this as an array formula by holding Ctrl and Shift while pressing Enter, and the selection fills with the calculated Y values. Note the curly braces around the formula in the formula bar, which indicates the formula is an array formula.


This chart shows the original NIST data points and the smoothed LOESS curve.


Wikipedia's Local regression article has a decent description of LOESS, with some pros and cons of this approach compared to other smoothing methods.


Example Uses of LOESS


This chart compares LOESS smoothing of website statistics with a simple 7-day moving average. The LOESS captures the major trends in the data, but is less severely affected by week to week fluctuations such as those occurring around Thanksgiving and over the year-end and New Year holidays.


Using LOESS to analyze the body mass indexes (BMI) of Playboy playmates gives more insights than linear regression over the whole data set or over portions of the data. See the discussion in Wired Relates Playboy Playmate BMI and Average BMI, 1954-2008 on the FlowingData blog.


The LOESS Function


The flexibility of this LOESS function will make it easy to encapsulate into an add-in that uses a dialog to facilitate user selection of data and parameters. A working version uses the following dialog:


A LOESS utility for Excel has finally been made ready for public consumption. It is described in LOESS Utility for Excel, where there is a link to download the utility. It’s still in preliminary form, but runs pretty much trouble free. Users are encouraged to comment on it to drive further development.


The LOESS utility for Excel has been updated, and the interface made more flexible. It is described in LOESS Utility – Awesome Update, where there is a link to download the new utility.


Cleveland, W. S. (1979), “Robust Locally Weighted Regression and Smoothing Scatterplots,” Journal of the American Statistical Association, Vol. 74, pp. 829-836.


Cleveland, W. S. and Devlin, S. J. (1988), “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting,” Journal of the American Statistical Association, Vol. 83, pp. 596-610.




Jon, nice stuff. I see that you’ve added a link to this page on Wikipedia :D I haven’t done a lot of digging on the subject, but I was wondering how one would determine the optimum values for alpha and N. In moving average forecasting, for example, one would calculate the interval (simple moving average) or the damping factor (exponential smoothing) based on minimizing the mean absolute percentage error (which Excel’s ATP assumes you’ve already calculated in your head!).


I don’t know how you’d optimize your alpha. A couple sources I looked at said “around 0.33” or “below 0.5”, but I think you’re left to decide based on how it looks. If N gets too large, the curve doesn’t even follow most of the points, but if it isn’t large enough, the curve is too wiggly.


I used N=7 for the NIST data, because that’s what the example used, though I looked at a lot of other values. I used probably alpha=0.33 for the playmate data, and I looked at a few other values, but it didn’t make much difference. The web stats showed some difference with changing alpha, but mostly in the width and depth of the year-end dip.


I think the idea is to get a good overview which isn’t too badly affected by an outlying point.


If anyone has any guidelines other than “it looks nice”, please share with us.


I stumbled upon that old comment of mine at JunkCharts, and saw your comment as well which led me here. First let me say: Wow, you’ve obviously got more VBA skills than I do, and although I haven’t tested it it looks great.


As for the question about using 2nd degree polynomials or higher instead of linear: you could replace the stuff after ‘do sum of squares’ with some trickery involving the LINEST function (see Clyde38’s post at http://www.eng-tips.com/viewthread.cfm?qid=184726 ), but you’d lose the weighting.


I have to say that I wouldn’t have gotten too far without your coding of the original function.


As for the higher order fit, it seems most people that I noticed just use first order, and second, I would want somehow to keep the weighting. I thought maybe a little dimensional analysis might help, but I need at least an hour to get my brain around stuff like that.


Bang Seung Beom says:


Please see the difference of the output:


X input    Y input     X output   original output   new output
0.55782    18.63654    0          -13.07212037      20.59304051
2.021727   103.4965    1          47.06914445       107.1603064
2.577325   150.3539    2          105.8974689       139.7673806
3.414029   190.5103    3          159.6924955       174.2630716
4.301408   208.7012    4          196.8626739       207.2333938
4.744839   213.7114    5          219.0115635       216.6616039
5.107378   228.4935    6          227.6417493       220.5444981
6.541166   233.5539    7          230.2534309       229.8606994
6.721618   234.5505    8          227.0376643       229.8347242
7.260058   223.8923    9          221.246659        229.4301269
8.133587   227.6834    10         202.9876659       226.6044626
9.122438   223.9198    11         187.8518139       220.3904231
11.92967   168.02      12         170.6711911       172.3479193
12.37977   164.9575    13         162.210816        163.8416617
13.27286   152.6111    14         160.7756941       161.8489846
14.27675   160.7874    15         159.7155843       160.3350921
15.3731    168.5557    16         161.0838629       160.1920102
15.64766   152.4266    17         194.5740586       161.0555463
18.56054   221.707     18         215.4190384       227.3399984
18.58664   222.6904    19         236.7904956       227.8985782
18.75728   243.1883


Your original output is the result of LOESS calculation using a 7 point moving regression, using X and Y input as inputs and X output as the X output.


Your new output is the result of LOESS calculation using a 7 point moving regression, using X and Y input as inputs and X input as the X output.


In MathCAD 14, you create a 2-column matrix of the data and set the first column = X and the second column = Y. I named my matrix NIST. Then enter the following:


where x was set as a range variable x := 0, 0.1 .. 20


This creates a function NIST(x) that you can plot versus the range variable x


Here is the MathCAD explanation for the two built-in functions ‘interp’ and ‘loess’:


loess(vx, vy, span) Returns a vector which interp uses to find a set of second-order polynomials that best fit the neighborhood of x and y data values in vx and vy in the least-squares sense. The size of the neighborhood is controlled by span.


interp(vs, vx, vy, x) Returns the interpolated y-value corresponding to x using the output vector vs from loess.


I guess your page explains LOESS more practically than all the rocket science pages I tried before I came here.


You have a big glitch in your moving average example. The MA always gets a lag of n/2 points. The reason is that the MA is the 50% trend value of a linear regression of n x, y points. Both smoothing filters are comparable on the X axis if the MA is calculated from the n/2 days in the future. You would then use the MA with OFFSET and AVERAGE functions, and not take the chart's built-in moving average. However, some example charts indicate lags in the LOESS. Am I wrong?


By the way, I always was and am a big fan of your Excel experiences. Big timesavers for me.


A comparison of moving average to loess would make more sense if the calculated moving average was plotted in the middle of the data for the moving average, but conventionally this is not done. Loess never used this convention.


Where I have noted an apparent lag in loess is in cases where the variation in the raw data is not symmetric, for example, when an increase is followed by a decrease which is much more or less steep. Changing the number of points in the moving regression may reduce the apparent lag. The apparent lag may be pronounced near the ends of the data set, where the regression consists of substantially more points on one side of the X value for the loess calculation.


> The apparent lag may be pronounced near the ends of the data set

That's what I know from moving average fitting. It was designed to fit a time series where mostly the newest fit is the interesting one.


From the VBA code I would say that LOESS fitting is really a choice for sparse data series. Time series only make sense if there are gaps in between. I already used a similar technique on time series in Excel, like in your example.


If the distances are always the same between two X points, then the algorithm is just too general and consuming. My experience is that you always find a moving average that is as almost as good fitting as the moving regression window.


By the way: why did you use ReDim in a for-loop? There is another ReDim going over all available data. A single ReDim before the for loop would take more memory and save time.


“I would say that LOESS fitting is really a choice for sparse data series.” The literature indicates that loess is better for heavily populated data sets.


Do your moving averages weight the data prior to averaging?


The ReDim statements and the iMin and iMax statements can all be taken out of the loop. Perhaps the original programmer initially had them inside the loop before I saw the code, and then I never put them in front of the loop.


I have count data (around 20,000 cases) for different days of the weeks. There are outliers in my series. Since my data is sparse, I cannot remove these outliers completely. I am planning to assign the weights to these outliers in order to smooth the data series.


For this I thought of using either loess or rloess. Please suggest whether it is correct to do so.


Does a version of the above LOESS function (VBA code) exist for HORIZONTAL array inputs?


I have a model with X, Y, and xDomain in rows rather than columns, and use intermediary TRANSPOSE functions as a workaround. However, it would be nice to have both a Horizontal and Vertical version of the LOESS function (similar to how VLOOKUP has the HLOOKUP counterpart). I tried modifying the code to work for Horizontal inputs, but apparently I don’t understand the code well enough to make it work.


Thanks, by the way, for this excellent info.


Well, it’s generally best practice to have data series in columns, since this is in accord with a database output’s layout of fields in columns and records in rows. So my advice is to transpose your whole workbook. Of course, I don’t know what you’re doing and why it’s arranged that way, I’m just generalizing.


You might adjust the input ranges like this:


I have no plans to rewrite my add-in’s code to handle horizontal ranges.


Many thanks for your quick reply!


By the way, it’s a legacy financial model, which uses a SAS-summarized dataset of aggregated monthly transaction records for a segment of accounts. Rows are metrics like deposits, withdrawals, account attrition, etc. ‘Month’ is in columns. So the dataset, model, and graphs show how the metrics have moved/will move over time.


Oh, “legacy”… That can get tough.


I’m about to drop support for Microsoft’s legacy product, Excel 2003, even though it’s my favorite. It’s just too much effort to support two of everything, and users of 2007/2010 now far outnumber users of 2003.


Charles Vanya says:


How can I download the LOESS smoothing tool for Excel? I have been trying to download it, but it just displays encrypted-looking information. How can I get it?


BROWNIAN_MOTION_SIMULATION Simulation of Brownian Motion in M Dimensions


BROWNIAN_MOTION_SIMULATION is a MATLAB library which simulates Brownian motion in an M-dimensional region.


Brownian motion is a physical phenomenon which can be observed, for instance, when a small particle is immersed in a liquid. The particle will move as though under the influence of random forces of varying direction and magnitude.


There is a mathematical idealization of this motion, and from there a computational discretization that allows us to simulate the successive positions of a particle undergoing Brownian motion.


Usage:


x = brownian_motion_simulation ( n, m, d, t ) where


n is the number of time steps to take (default 1000);


m is the spatial dimension, (default 2);


d is the diffusion coefficient, (default 10.0);


t is the total time interval (default 1.0);
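Putting the defaults together, a minimal usage sketch (assuming the library's .m files are on the MATLAB path) might be:

% Minimal usage sketch with the documented default values.
n = 1000;      % number of time steps
m = 2;         % spatial dimension
d = 10.0;      % diffusion coefficient
t = 1.0;       % total time interval
x = brownian_motion_simulation ( n, m, d, t );   % successive positions of the particle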


Licensing:


The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.


Languages:


Related Data and Programs:


DICE_SIMULATION. a MATLAB program which simulates N tosses of M dice, making a histogram of the results.


DUEL_SIMULATION. a MATLAB program which simulates N repetitions of a duel between two players, each of whom has a known firing accuracy.


GAMBLERS_RUIN_SIMULATION. a MATLAB program which simulates the game of gambler's ruin.


HIGH_CARD_SIMULATION. a MATLAB program which simulates a situation in which you see the cards in a deck one by one, and must select the one you think is the highest and stop.


ISING_2D_SIMULATION. a MATLAB program which carries out a Monte Carlo simulation of an Ising model, a 2D array of positive and negative charges, each of which is likely to "flip" to be in agreement with neighbors.


LORENZ_SIMULATION. a MATLAB program which solves the Lorenz equations and displays the solution, for various starting conditions.


POISSON_SIMULATION. a MATLAB library which simulates a Poisson process in which events randomly occur with an average waiting time of Lambda.


RANDOM_WALK_1D_SIMULATION. a MATLAB program which simulates a random walk in a 1-dimensional region.


RANDOM_WALK_2D_SIMULATION. a MATLAB program which simulates a random walk in a 2-dimensional region.


RANDOM_WALK_2D_AVOID_SIMULATION. a MATLAB program which simulates a self-avoiding random walk in a 2-dimensional region.


RANDOM_WALK_3D_SIMULATION. a MATLAB program which simulates a random walk in a 3-dimensional region.


REACTOR_SIMULATION. a MATLAB program which carries out a simple Monte Carlo simulation of the shielding effect of a slab of a certain thickness in front of a neutron source. This program was provided as an example with the book "Numerical Methods and Software."


SDE. a MATLAB library which illustrates the properties of stochastic differential equations, and common algorithms for their analysis, by Desmond Higham;


SIR_SIMULATION. a MATLAB program which simulates the spread of a disease through a hospital room of M by N beds, using the SIR (Susceptible/Infected/Recovered) model.


THREE_BODY_SIMULATION. a MATLAB program which simulates the behavior of three planets, constrained to lie in a plane, and moving under the influence of gravity, by Walter Gander and Jiri Hrebicek.


TRAFFIC_SIMULATION. a MATLAB program which simulates the cars waiting to get through a traffic light.


TRUEL_SIMULATION. a MATLAB program which simulates N repetitions of a duel between three players, each of whom has a known firing accuracy.


Source Code:


brownian_motion_simulation.m simulates Brownian motion.


brownian_motion_display.m plots a Brownian motion trajectory for the case M = 2.


brownian_displacement_simulation.m computes the squared displacement over time, for an ensemble of cases.


brownian_displacement_display.m plots Brownian motion displacement versus the expected behavior for an ensemble of cases.


timestamp.m prints the YMDHMS date as a timestamp.


Examples and Tests:


Some plots are made by the test program.


motion_1d. png. a plot of a Brownian motion trajectory in 1D, with time as second dimension.


motion_2d. png. a plot of a Brownian motion trajectory in 2D.


motion_3d. png. a plot of a Brownian motion trajectory in 3D.


displacement_1d. png. a plot of squared displacements, averaged over several 1D Brownian motions.


displacement_2d. png. a plot of squared displacements, averaged over several 2D Brownian motions.


displacement_3d. png. a plot of squared displacements, averaged over several 3D Brownian motions.


Last revised on 30 September 2012.


Discrete-time Fourier transform of the moving-average filter


Magnitude-frequency response of the two-dimensional moving-average filter.


Commodity Channel Index (CCI)


The Commodity Channel Index Technical Indicator (CCI) measures the deviation of the commodity price from its average statistical price. High values of the index indicate that the price is unusually high compared with its average, and low values show that it is unusually low. In spite of its name, the Commodity Channel Index can be applied to any financial instrument, not only to commodities.


There are two basic techniques of using Commodity Channel Index:


Finding divergences. A divergence appears when the price reaches a new maximum while the Commodity Channel Index cannot rise above its previous maximums. This classical divergence is normally followed by a price correction.


As an indicator of overbought/oversold conditions. The Commodity Channel Index usually varies in the range of ±100. Values above +100 indicate an overbought state (and a probability of a corrective decline), and values below -100 indicate an oversold state (and a probability of a corrective rise).


Calculation:


To find the Typical Price, add the HIGH, the LOW, and the CLOSE prices of each bar and then divide the result by 3.


To calculate the n-period Simple Moving Average of typical prices.
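In MATLAB terms, those two steps -- together with the standard mean-deviation and 0.015 scaling used to finish the CCI, which are not spelled out in this excerpt -- might look roughly like the sketch below, where hi, lo and cl are assumed column vectors of bar prices:

n   = 14;                                 % CCI period
tp  = (hi + lo + cl) / 3;                 % 1) typical price
cci = nan(size(tp));
for k = n:length(tp)
    win    = tp(k-n+1:k);                 % last n typical prices
    sma    = mean(win);                   % 2) n-period simple moving average
    md     = mean(abs(win - sma));        % mean deviation (assumed standard CCI step)
    cci(k) = (tp(k) - sma) / (0.015 * md);
end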


REMST: MATLAB function to remove trend and seasonal component using the moving average method


Abstract: Y = REMST returns a time series with removed polynomial trend and seasonal components of a given period. As additional output parameters it also returns the identified seasonal component and the fitted polynomial coefficients. REMST uses the moving average technique (see eg. Weron (2006) "Modeling and Forecasting Electricity Loads and Prices", Wiley, Section 2.4.3).


Related works: This item may be available elsewhere in EconPapers: Search for items with the same title.


Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text


Ordering information: This software item can be ordered from http://repec.org/docs/ssc.php


More software in Statistical Software Components from Boston College Department of Economics Boston College, 140 Commonwealth Avenue, Chestnut Hill MA 02467 USA. Contact information at EDIRC. Series data maintained by Christopher F Baum ( ).


The VLFeat open source library implements popular computer vision algorithms specializing in image understanding and local features extraction and matching. Algorithms include Fisher Vector, VLAD, SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, SLIC superpixels, quick shift superpixels, large scale SVM training, and many others. It is written in C for efficiency and compatibility, with interfaces in MATLAB for ease of use, and detailed documentation throughout. It supports Windows, Mac OS X, and Linux. The latest version of VLFeat is 0.9.20 .


Download


Documentation


Tutorials


Example applications


Citing


Acknowledgments


UCLA Vision Lab Oxford VGG .


News


14/1/2015 VLFeat 0.9.20 released. Maintenance release. Bugfixes.
12/9/2014 MatConvNet. Looking for an easy-to-use package to work with deep convolutional neural networks in MATLAB? Check out our new MatConvNet toolbox.
12/9/2014 VLFeat 0.9.19 released. Maintenance release. Minor bugfixes and fixes compilation with MATLAB 2014a.
29/01/2014 VLFeat 0.9.18 released. Several bugfixes. Improved documentation, particularly of the covariant detectors. Minor enhancements of the Fisher vectors. [Details]
22/06/2013 VLFeat 0.9.17 released. Rewritten SVM implementation, adding support for SGD and SDCA optimizers and various loss functions (hinge, squared hinge, logistic, etc.) and improving the interface. Added infrastructure to support multi-core computations using OpenMP. Added OpenMP support to KD-trees and KMeans. Added new Gaussian Mixture Models, VLAD encoding, and Fisher Vector encodings (also with OpenMP support). Added LIOP feature descriptors. Added new object category recognition example code, supporting several standard benchmarks off-the-shelf. This is the third point update supported by the PASCAL Harvest programme. [Details]
01/10/2012 VLBenchmarks 1.0-beta released. This new project provides simple-to-use benchmarking code for feature detectors and descriptors. Its development was supported by the PASCAL Harvest programme. [Details]
01/10/2012 VLFeat 0.9.16 released. Added VL_COVDET() (covariant feature detector). This function implements the following detectors: DoG, Hessian, Harris Laplace, Hessian Laplace, Multiscale Hessian, Multiscale Harris. It also implements affine adaptation, estimation of feature orientation, computation of descriptors on the affine patches (including raw patches), and sourcing of custom feature frames. Added the auxiliary function VL_PLOTSS(). This is the second point update supported by the PASCAL Harvest programme. [Details]
11/9/2012 VLFeat 0.9.15 released. Added VL_HOG() (HOG features). Added VL_SVMPEGASOS() and a vastly improved SVM implementation. Added IHASHSUM (hashed counting). Improved INTHIST (integral histogram). Added VL_CUMMAX(). Improved the implementation of VL_ROC() and VL_PR(). Added VL_DET() (Detection Error Trade-off (DET) curves). Improved the verbosity control to AIB. Added support for Xcode 4.3, improved support for past and future Xcode versions. Completed the migration of the old test code in toolbox/test, moving the functionality to the new unit tests in toolbox/xtest. Improved credits. This is the first point update supported by the PASCAL Harvest (several more to come shortly). A big thank you to our sponsor! [Details]
10/1/2012 PASCAL2 Harvest funding. In the upcoming months many new functionalities will be added to VLFeat thanks to the PASCAL Harvest. See here for details.


© 2007-13 The authors of VLFeat


Sinewave and Sinusoid+Noise Analysis/Synthesis in Matlab


Many sounds of importance to human listeners have a pseudo-periodic structure, that is over certain stretches of time, the waveform is a slightly-modified copy of what it was some fixed time earlier, where this fixed time period is typically in the range of 0.2 - 10 ms, corresponding to a fundamental frequency of 100 Hz - 5 kHz, usually giving rise to a corresponding pitch percept.


Periodic signals can be approximated by a sum of sinusoids whose frequencies are integer multiples of the fundamental frequency and whose magnitudes and phases can be uniquely determined to match the signal - so-called Fourier analysis. One manifestation of this is the spectrogram, which shows short-time Fourier transform magnitude as a function of time. A narrowband spectrogram (i. e. one produced with a short-time window longer than the fundamental period of the sound) will reveal a series of nearly-horizontal, uniformly-spaced energy ridges, corresponding to the sinusoidal Fourier components or harmonics that are an equivalent representation of the sound waveform. Below is a spectrogram of a brief clarinet melody; the harmonics are clearly defined.


The key idea behind sinewave modeling is to represent each one of those ridges explicitly and separately as a set of frequency and magnitude values. The resulting sinusoid tracks can be resynthesized by using them as control parameters to a sinewave oscillator. Resynthesis can be complete or partial, and can be modified, for instance by stretching in time and frequency, or by some more unusual technique.


Contents


Sinewave analysis


Sinewave analysis is in concept quite simple: Form the short-time Fourier transform magnitude (as shown in the spectrogram below), find the frequencies and magnitudes of the spectral peaks at each time step, thread them together, and you've got your representation.


In practice, it gets a little complicated for a couple of reasons. Firstly, picking peaks is sometimes difficult: if there's a very slight local maximum on the 'shoulder' of a bigger peak, does that count or not? Also, the resolution of the STFT is typically not all that good (perhaps 128 bins spanning 4 kHz, or about 30 Hz), so you need to interpolate the maximum in both frequency and magnitude. However, this basically works.
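One common way to do that interpolation -- not necessarily the exact method used in the tutorial's code -- is to fit a parabola through the log-magnitude of the peak bin and its two neighbours; Scol and k below are assumed names for one magnitude column and the index of a local maximum:

% Quadratic interpolation of a spectral peak around bin k of magnitude column Scol.
a = 20*log10(Scol(k-1)); b = 20*log10(Scol(k)); c = 20*log10(Scol(k+1));
p     = 0.5 * (a - c) / (a - 2*b + c);   % fractional bin offset, in (-0.5, 0.5)
kpeak = k + p;                           % interpolated peak position, in bins
mpeak = b - 0.25 * (a - c) * p;          % interpolated peak magnitude, in dB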


[R, M] = extractrax(S, T) does this tracking stage (see below for an explanation of the arguments). It actually has some fairly complex heuristics internally to decide when a track's magnitude suggests that a new track should be formed, but it works well in many cases. Usage is as below:


Notice that a few tracks have picked up the 'non-harmonic' sinusoids between the main harmonics. These, I think, are transient resonances at an octave below the main note. If we were doing strictly harmonic analysis, these would be excluded.


The R and M matrices returned by extractrax. m have one row for each track generated by the system, and one column for each time frame in the original spectrogram. A given track is defined by the corresponding row from each matrix. Most tracks will only exist for a subset of the time steps, so their magnitudes are set to zero and their frequencies are set to NaN for the steps where they don't exist (using NaN allows the plotting trick above, since NaN values are discarded by plot()).


Resynthesis


We can get a rough resynthesis based on this analysis by using a simple sinewave oscillator bank (originally developed for sinewave speech replicas). X = synthtrax(F, M, SR, W, H) takes as input frequency and magnitude matrices F and M as generated above, an output sampling rate SR, and the number of samples represented by each column of the track-definition matrices, i.e. the analysis hop size H. Thus:
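For example, something along these lines, where the sample rate, window and hop values are only illustrative and R, M are the track matrices from extractrax above:

SR = 16000; W = 256; H = 128;        % illustrative analysis parameters, not from the text
X  = synthtrax(R, M, SR, W, H);      % resynthesize from the sinusoid tracks
soundsc(X, SR);                      % listen to the sinusoid-only reconstruction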


Residual extraction


Tracking the harmonic peaks and resynthesizing them with sinusoids worked pretty well. But some energy was not reproduced, such as the breath noise that did not result in any strong harmonic peaks. In theory, we ought to be able to recover that part of the signal by subtracting our resynthesis of the harmonics from the full original signal. We could then see what they sounded like, or perhaps model them some other way.


In practice, this won't work unless we are very careful to make the frequencies, magnitudes and phases of the reconstructed sinusoids exactly match the original. We didn't worry about phase reconstruction in the previous section, because it has little or no effect on the perceived sound. But if we want to cancel out the harmonics, we will need both to model it and to match it in reconstruction. Thus we need some new functions:


[I, S] = ifgram(X, N, W, H, SR) calculates both a conventional spectrogram (returned in S) and an 'instantaneous frequency' gram, formed by taking the discrete derivative of the phase in each STFT frequency channel along time. This permits a more accurate estimate of peak frequencies. X is the sound, N is the FFT length, W is the time window length, H is the hop advance, and SR is the sampling rate (to get the frequencies scaled right).


V = colinterpvals (R, M) interpolates values down the columns of a matrix. R is a set of fractional indices (starting from 1.0, possibly including NaNs for missing points); V is returned as a conformal matrix, with each value the linear interpolation between the bins of the corresponding column of matrix M.


X = synthphtrax(F, M, P, SR, W, H) performs sinusoid resynthesis just like synthtrax, but this version takes a matrix of exact phase values, P, to which the oscillators must conform at each control time. Matching both frequencies (from F) and phases (from P) requires cubic phase interpolation, since frequency is the time-derivative of phase.


Using these pieces, we can make a more accurate sinewave model of the harmonics, and subtract it from the original to cancel the harmonic part:
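A rough sketch of that cancellation step, assuming F, Mg and P are matrices of per-track frequencies, magnitudes and phases assembled with ifgram and colinterpvals as just described (the exact argument conventions are not shown in this excerpt), d is the original signal, and SR, W, H are as before:

dh = synthphtrax(F, Mg, P, SR, W, H);   % harmonics resynthesized with matched phases
L  = min(length(d), length(dh));
dr = d(1:L) - dh(1:L);                  % residual: original minus harmonic model
soundsc(dr, SR);                        % mostly the breath noise etc. should remain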


Modified resynthesis


Now that we have the signal separate into harmonic and noisy parts, we can try modifying them prior to resynthesis. For instance, we could slow down the sound by some factor simply by increasing the hop size used in resynthesis. For the noisy part, we could model it with noise-excited LPC (using the simple LPC analysis routine lpcfit.m and corresponding resynthesis lpcsynth.m), and again double the resynthesis hop size. Let's have a go:
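As a minimal sketch of the harmonic part alone (the lpcfit/lpcsynth argument conventions are not given in this excerpt, so the noise part is left out), doubling the hop at resynthesis time stretches the tracks to twice their original duration:

X2 = synthtrax(R, M, SR, W, 2*H);   % resynthesize the tracks with twice the hop size
soundsc(X2, SR);                    % roughly half speed, pitch unchanged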


Integrated Sinusoid + Noise Time Scaling


The function y = scalesound(d, sr, f) wraps all the steps above into a single function that performs time-scale modification of a signal d at sampling rate sr to make it f times longer in duration:


Further reading


Here's a bunch of pointers to more information on sinewave modeling of sound. Lots of people have pursued this idea in different guises, so this is really just scratching the surface.


My introduction to this idea was via Tom Quatieri. The sinewave modeling system he and Rob McAulay developed is often known as MQ modeling.


Lemur is an MQ analysis-synthesis package out of CERL at Washington State University.


I first came across the idea of treating the noise residual separately in the work of Xavier Serra when he was at Stanford CCRMA. Since then, he's done a great deal with spectral modeling synthesis or SMS.


Harmonic modeling is a popular idea in speech analysis and synthesis. Yannis Stylianou has recently developed a clever variant of harmonic plus noise modeling, used as part of AT&T's latest speech synthesizer.


Descargar


You can download all the code examples mentioned above (and this tutorial) in one compressed directory: sinemodel.zip.


Referencing


If you use this code in your research and would like to acknowledge it (and direct others to it), you could use a reference like this:


Acknowledgment


This project was supported in part by the NSF under grant IIS-0716203. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Sponsors.


Published with MATLAB® 7.11


Moving Average


The moving average is a popular technical indicator which investors use to analyze price trends. It is simply a security's average closing price over the last specified number of days.


How it works (Example):


Some of the most popular moving averages are the 50-day, 100-day, 150-day, and 200-day moving averages. The shorter the amount of time covered by the moving average, the shorter the time lag between the signal and the market's reaction.


You can calculate the moving average for any amount of time. To do so, just pick a period to analyze (we'll use 30 days for this example), and take the average of the security's closing price over the last 30 days [(Day 1 + Day 2 + Day 3 + ... + Day 29 + Day 30)/30].
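In MATLAB terms, a trailing 30-day average of a vector of daily closes (here an assumed variable closes) might look like this sketch:

n    = 30;
ma30 = filter(ones(1, n)/n, 1, closes);   % trailing 30-day simple moving average
ma30(1:n-1) = NaN;                        % the first 29 days do not have a full window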


On the surface, it seems as though the higher the moving average goes, the more bullish the market is (and the lower it goes, the more bearish ). In practice, however, the reverse is true. Extremely high readings are a warning that the market may soon reverse to the downside. High readings reveal that traders are far too optimistic. When this occurs, fresh new buyers are often few and far between. Meanwhile, very low readings signify the reverse; the bears are in the ascendancy and a bottom is near. The shorter the moving average, the sooner you'll see a change in the market.


Why it Matters:


The moving average is perceived to be the dividing line between a stock that is technically healthy and one that is not. Furthermore, the percentage of stocks above the moving average helps determine the overall health of the market .


Many market traders also use moving averages to determine profitable entry and exit points into specific securities.


InvestingAnswers is the only financial reference guide you’ll ever need. Our in-depth tools give millions of people across the globe highly detailed and thoroughly explained answers to their most important financial questions.


We provide the most comprehensive and highest quality financial dictionary on the planet, plus thousands of articles, handy calculators, and answers to common financial questions -- all 100% free of charge.


Each month, more than 1 million visitors in 223 countries across the globe turn to InvestingAnswers. com as a trusted source of valuable information.


Market Memory Help and Support


What is a Moving Average?


Moving average is a trend-following technical indicator which plots the average of the price of n-periods of a security price (or other underlying value) over a continuous period. For example, if a 10-day moving average is chosen, the value shown is the average of the 10 days prior. The plot shown is each consecutive 10-day average on all days prior as well. This gives a smoothed appearance to the line.


Moving Average is commonly used in technical trading. It mainly helps traders gauge the major trend directions.


Simple and Exponential Moving Average


Generally, there are two types of moving average, simple moving average (SMA) and exponential moving average (EMA). Moving average calculations for two types of moving average are different.


Simple Moving Average is the arithmetic mean, which gives same weight to each day’s price. Simple moving average formula is simple and straightforward, which is equal to the sum of total closes divided by the number of total days. For example, a 5-day moving average is equal to the sum of last five days’ closes divided by 5.


Exponential Moving Average is a weighted moving average which applies more weights to recent prices. Its calculation is a little bit complex. Exponential moving average formulas are shown as follow.


Current EMA = (current price – previous EMA) * Multiplier + previous EMA.


Multiplier = 2/(1+N) where N = the number of days.
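Transcribed directly into MATLAB, those two formulas give something like the sketch below; price is an assumed vector of closes, and seeding the first EMA value with the first price is a common convention rather than something stated above:

N    = 10;                                % number of days
mult = 2 / (1 + N);                       % Multiplier = 2/(1+N)
ema  = zeros(size(price));
ema(1) = price(1);                        % seed value (an assumption)
for k = 2:length(price)
    ema(k) = (price(k) - ema(k-1)) * mult + ema(k-1);
end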


200-day Moving Average and 50-day Moving Average


Moving average filter can be used in different time frames, such as 200-day moving average and 50-day moving average. A 200-day moving average generally indicates a long-term trading and it moves slower and smoother. A 50-day moving average commonly represents an intermediate trading and it moves faster and is more sensitive to price changes.


Moving Average Settings


Market Memory allows users to choose type of moving average, the period of moving average and range by their preferences. The period of moving average is a certain period of time users choose to adjust moving average, such as a 10-day moving average. Range is a relative number to express the number of consecutive days that are above (+) or below (-) the moving average. In other words, the value of range will further filter the days. Several moving average examples are shown below. If range is < 0 then the filter will only accept data that belong to a series of a certain consecutive days below the moving average.


If range is >0 then the filter will only accept data that belong to a series of a certain consecutive days above the moving average.


If range = 0 then the filter will return no data.


If several consecutive days match the criteria you have selected, a slider can be used to limit the number of selected days. If “above for 1 day to infinity” is chosen, the filter will select all the dates on which closing prices are above the moving average.


If we choose “limit number of days”, the filter will select only certain days of the selected consecutive days. For example, if “above one day to one day” is set, the filter will select only the first day of the selected consecutive days.


Impulse Response and Convolution


Digital signal processing is (mostly) applied linear algebra.


The relevance of matrix multiplication turned out to be easy to grasp for color matching. We had fixed dimensions of 1 (number of test lights), 3 (number of primary lights, number of photopigments), and 31 (number of sample points in a spectral power distribution for a light, or in the spectral absorption for a pigment); and it turned out that some important facts about color vision can be modeled as projection of the higher-dimensional spectral vectors into a lower-dimensional psychological subspace. It's also easy to see how this idea works out when we're modeling a relationship between independent variables (like experimental conditions) and dependent variables (like subject responses), or when we're trying to classify sets of multivariate measurements (like formant values).


But what does it mean to interpret processing audio or video signals as matrix multiplication? And why would we want to?


Consider a simple case. The CD standard samples an audio waveform 44,100 times a second, so that a piece lasting 2:48 contains 7,408,800 samples (ignoring the stereo issue). Suppose we want to adjust the relative loudness of low, mid, and high frequencies, to compensate for room acoustics, our speaker system, or our personal taste.


The 7,408,800 samples are elements of a vector; any equalization function (as we'll show later) is linear, and any linear transformation is equivalent to a matrix multiplication; so we can model its effect on one channel of our piece of music as multiplication by a 7,408,800 by 7,408,800 matrix. "All" we have to do is to multiply our 7,408,800-element column vector by this matrix, producing another column vector with the same number of elements -- and this will be our equalized bit of audio. If we wanted to operate on a half-hour recording, the scale of the operation would go up in proportion.


This does not seem like a very practical technique. It is conceptually correct, and sometimes it may be useful to think of things in this way. However, this is (needless to say) not how a DSP implementation of an equalizer is accomplished. There are much easier ways, which are mathematically equivalent for systems with certain properties, whose matrices have corresponding properties that permit simple and efficient implementation of the equivalent calculation.


This topic can be reduced to a slogan:


The effect of any linear, shift-invariant system on an arbitrary input signal is obtained by convolving the input signal with the response of the system to a unit impulse.


To get an idea of what this might be good for, consider some things in the real world that are (or at least can be successfully modeled as) linear shift-invariant systems:


Once you understand the terminology in this slogan, it will be almost immediately obvious that it's true; so in a sense this lecture is mostly a matter of learning some definitions!


We already know what a linear system is. A shift-invariant system is one where shifting the input always shifts the output by the same amount. When we're representing signals by vectors, then a shift means a constant integer added to all indices. Thus shifting vector v by n samples produces a vector w such that w (i+n) = v (i).


Note: there is a little problem here in deciding what happens at the edges. Thus for a positive shift n, the first element of w should correspond to the minus-nth element of v -- but v isn't defined for indices smaller than 1 (or zero, if we decide to start there). There's a similar problem at the other end. Conventional DSP mathematics solves this problem by treating signals as having infinite extent -- defined for all indices from minus infinity to infinity. Real-world signals generally start and stop, however. This is a question we'll return to several times, including once at the end of this lecture, when we'll provide a slightly more formal account in both the EE/DSP perspective and the linear algebra perspective.


For signals that are functions of time -- i. e. where the succession of indices corresponds to a sequence of time points -- a shift-invariant system can equivalently be called a time-invariant system. Here the property of shift-invariance has a particularly intuitive meaning. Suppose we probe some acoustic resonator with a particular input at 12:00 noon on January 25, 1999, and get a response (whatever it is), which we record. Then we probe the same system again with the same input, at 12:00 noon on January 26, 1999. We expect to record the same output -- just shifted forward in time by 24 hours! The same expectation would apply for a time difference of one hour, or one minute. Finally, if we hypothetically delay the input by 1 millisecond, we expect the output to be delayed by the same amount -- and to be otherwise unchanged! The resonator doesn't know what time it is, and responds in the same way regardless of when it is probed.


A unit impulse (for present purposes) is just a vector whose first element is 1, and all of whose other elements are 0. (For the electrical engineer's digital signals of infinite extent, the unit impulse is 1 for index 0 and 0 for all other indices, from minus infinity to infinity).


We'll work up to what convolution is by giving a simple example. Here's a graph of 50 samples (about 6 milliseconds) of a speech waveform.


We're representing this waveform as a sequence of numbers -- a vector -- and from this perspective a more suitable graphical representation of the same data is a "lollipop plot", which shows us each sample as a little "lollipop" sticking up or down from a zero line:


Let's zoom in on just the first six of these numbers:


Matlab will tell us their specific values:


We can think of this six-element vector s as being the sum of six other vectors s1 to s6, each of which "carries" just one of its values, with all the other values being zero:


Recall that an impulse (in the current context, anyhow) is a vector whose first element has the value 1 and all of whose subsequent elements are zero. The vector we've called s1 is an impulse multiplied by 10622. The vector s2 is an impulse shifted to the right by one element and scaled by 5624. Thus we are decomposing s into a set of scaled and shifted impulses. It should be clear that we can do this to an arbitrary vector.
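A small sketch of that decomposition; the first two sample values, 10622 and 5624, appear above, while the remaining four are made up for illustration:

s = [10622 5624 -3000 1500 -800 400];   % six samples (last four values hypothetical)
n = length(s);
S = zeros(n, n);
for k = 1:n
    S(k, k) = s(k);       % row k of S is s_k: an impulse shifted to position k, scaled by s(k)
end
isequal(sum(S, 1), s)     % summing the rows recovers the original vector: returns logical 1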


The same decomposition represented graphically:


Why is this interesting? Well, consider some arbitrary shift-invariant linear system D. Suppose that we apply D (without knowing anything more about it) to an impulse, with the result shown below: the first sample of the output is 1, the second sample is -1, and the rest of the samples are 0. This result is the impulse response of D.


This is enough to predict the result of applying D to our scaled and shifted impulses s1 ... sn. Because D is shift-invariant, the effect of shifting the input is just to shift the output by the same amount. Thus an input consisting of a unit impulse shifted by any arbitrary amount will produce a copy of the impulse response, shifted by that same amount.


We also know that D is linear, and therefore a scaled impulse as input will produce a scaled copy of the impulse response as output.


Using these two facts, we can predict the response of D to each of the scaled and shifted impulses s1 ... sn. This is shown graphically below:


If we arrange the responses to s1 ... s6 as the rows of a matrix, the actual numbers will look like this (the arrangement of these outputs as the rows of a matrix is purely for typographical convenience; also notice that we've allowed the response to input s6 to fall off the end of the world, so to speak):


This information, in turn, is enough to let us predict the response of the system D to the original vector s, which (by construction) is just the sum of s1 + s2 + s3 + s4 + s5 + s6. Since D is linear, applying it to this sum is the same as applying it to the individual components of the sum and then adding up the results. This is just the sum of the columns of the matrix shown above:


(Matlab "sum", applied to a matrix, produces a row vector of the sums of the columns. )


Notice that (at least for the second position in the sum and onward) this makes the output in position i equal to the difference between the input in position i and the input in position i-1. In other words, D happens to be calculating the first difference of its input.


It should be clear that the same basic procedure will work for ANY shift-invariant linear system, and for ANY input to such a system:


express the input as a sum of scaled and shifted impulses;


calculate the response to each of these by scaling and shifting the system's impulse response;


add up the resulting set of scaled and shifted impulse responses.


This process of adding up a set of scaled and shifted copies of one vector (here the impulse response), using the values of another vector (here the input) as the scaling values, is convolution -- at least this is one way to define it.


Another way: the convolution of two vectors a and b is defined as a vector c whose kth element is (in MATLAB-ish terms)

c(k) = sum over j of a(j) * b(k+1-j)     (Equation 1)


(The "+1" in "k+1-j" is due to the fact that MATLAB indices have the bad taste to start from 1 instead of the mathematically more elegant 0).


This formulation helps indicate that we can also think of convolution as a process of taking a running weighted average of a sequence -- that is, each element of the output vector is a linear combination of some of the elements of one of the input vectors-- where the weights are taken from the other input vector.


There are a couple of little problems: how long should c be? And what should we do if k+1-j is negative or greater than the length of b?


These problems are a version of the "edge effects" we've already hinted at, and will see again. One possible solution is to imagine that we are convolving two infinite sequences created by embedding a and b in an ocean of zeros. Now arbitrary index values---negative ones, ones that seemed "too big"---make perfect sense. The value of extended a and extended b for index values outside their actual range is now perfectly well defined: always zero. The result of Equation 1 will be another infinite-length sequence c .


A little thought will convince you that most of c will also be necessarily zero, since the non-zero weights from b and the non-zero elements of a will not coincide in those cases. How many elements of c have a chance to be non-zero? Well, just those integers k for which there is at least one integer j such that 1 <= j <= length( a ) and 1 <= k+1-j <= length( b ). With a little more thought, you can see that this means that the length of c will be one less than the sum of the lengths of a and b .


Referring again to Equation 1, and imagining the two vectors a and b as embedded in their seas of zeros, we can see that we will get the right answer if we allow k to run from 1 to length(a)+length(b)-1, and for each value of k, allow j to run from max(1, k+1-length(b)) to min(k, length(a)). Again, all of this is in MATLAB index terms, and so we can transfer it directly to a MATLAB program myconv() to perform convolution:
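One way such a myconv() might look is sketched below; it simply follows the index limits just derived (the lecture's own listing is not reproduced in this excerpt):

function c = myconv(a, b)
% Convolution restricted to the length(a)+length(b)-1 outputs that can be non-zero.
la = length(a); lb = length(b);
c = zeros(1, la + lb - 1);
for k = 1:(la + lb - 1)
    for j = max(1, k + 1 - lb):min(k, la)
        c(k) = c(k) + a(j) * b(k + 1 - j);   % Equation 1
    end
end

For instance, myconv([2 3], [4 5]) returns [8 22 15], which matches the built-in conv and the polynomial example below.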


This will give us just the piece of the conceptually infinite c that has a chance to be non-zero.


MATLAB has a built-in convolution function conv(), so we can compare it with the one that we just wrote:


As an aside, we should mention that convolution will also give us the correct results if we think of a, b and c as the coefficients of polynomials, with c being the coefficients of the polynomial resulting from multiplying a and b together. Thus convolution is isomorphic to polynomial multiplication, so that e. g.


can also be interpreted to mean that (2*x + 3)*(4*x + 5) = 8*x^2 + 22*x + 15 and


can also be interpreted to mean that (3*x + 4)*(5*x^2 + 6*x + 7) = 15*x^3 + 38*x^2 + 45*x + 28


If you believe this, it follows immediately from the commutativity of multiplication that convolution also commutes (and is associative, and distributes over addition).


We can exemplify these properties empirically:


These are important points, so if you do not immediately see that they are always true, spend some time with Equation 1 -- or with the convolution operator in Matlab -- and convince yourself.


We've given two pictures of conv(a, b):


in one, we add up a bunch of scaled and shifted copies of a, each copy scaled by one value of b, and shifted over to align with the location of that value in b.


in the other, we take a running weighted average of a, using b (backwards) as the weights.


We can see the relationship between these two pictures by expressing Equation 1 in matrix form. We have been thinking of b as the impulse response of the system, a as the input, and c as the output. This implies that the matrix S will have dimensions length(c) by length(a), if c = S a is to be legal matrix-ese.


Each element of the output c will be the inner product of a row of S with the input a. This will be exactly Equation 1 if the row of S in question is just b, time-reversed, shifted, and suitably padded with zeros. As b shifts out of the picture, we just shift in zeros from the "sea of zeros" we imagine ourselves to be floating in.


A small modification of our convolution program will produce the needed matrix:
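One sketch of what that modified program might look like (again, the original listing is not reproduced in this excerpt):

function C = cmat(a, b)
% Build the matrix operator whose product with a(:) equals the convolution of a and b:
% row k holds a time-reversed, shifted copy of b, padded with zeros at the edges.
la = length(a); lb = length(b);
C = zeros(la + lb - 1, la);
for k = 1:(la + lb - 1)
    for j = max(1, k + 1 - lb):min(k, la)
        C(k, j) = b(k + 1 - j);
    end
end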


Thus cmat(a, b) creates a matrix operator C that can be multiplied by the vector a to create exactly the same effect as convolution of a with b:


This works because the rows of C are suitably shifted (backwards-running) copies of b -- or equivalently, because the columns of C are suitably shifted (forwards-running) copies of b .


This gives us the two pictures of convolutional operators:


THE RUNNING WEIGHTED AVERAGE OF THE INPUT: The rows of C are shifted backwards copies of b, and the inner product of each row with a will give us a weighted average of a suitable piece of a, which we stick into the appropriate place in the output c.


THE SUM OF SCALED AND SHIFTED COPIES OF THE IMPULSE RESPONSE: The columns of C are shifted copies of b. Taking the other view of matrix multiplication, namely that the output is the sum of the columns of C weighted by the elements of a, gives us the other picture of convolution, namely adding up a set of scaled and shifted copies of the "impulse response" b.


A larger example:


In working through the details of convolution, we had to deal with the "edge effect": the fact that the convolution equation (Equation 1) implies index values for finite-length inputs a and b outside the range in which they are defined.


Obviously we could choose quite a number of different ways to supply the missing values---the particular choice that we make should depend on what we are doing. There are some cases in which the "sea of zeros" concept is exactly correct.


However, there are alternative situations in which other ideas make more sense. For instance, we might think of b as sitting in a sea of infinitely many repeated copies of itself. Since this means that index values "off the end" of b wrap around to the other end in a modular fashion, just as if b was on a circle, the kind of convolution that results is called "circular convolution."


Keep this in mind: we will come back to it in a later lecture.


Meanwhile, let's repeat the slogan we began with:


The effect of any linear, shift-invariant system on an arbitrary input signal is obtained by convolving the input signal with the response of the system to a unit impulse.


(Notice that this is the same property of linear systems that we observed in the case of color matching -- where we could learn everything we needed to know about the system by probing it with a limited set of "monochromatic" inputs. If the system were only linear, and not shift-invariant, the analogy here would be to probe it with unit impulses at every possible index value -- each such probe giving us one column of the system matrix. That was practical with a 31-element vector, but it would be less attractive with vectors of millions or billions of elements! However, if the system is also shift-invariant, a probe with just one impulse is enough, since the responses of all the shifted cases can be predicted from it.)


Convolution can always be seen as matrix multiplication -- this has to be true, because a system that can be implemented by convolution is a linear system (as well as being shift-invariant). Shift-invariance means that the system matrix has particular redundancies, though.


When the impulse response is of finite duration, this slogan is not only mathematically true, but also is often quite a practical way to implement the system, because we can implement the convolution in a fixed number of multiply-adds per input sample (exactly as many as there are non-zero values in to the system's impulse response). Systems of this type are generally called "finite impulse response" (FIR) filters, or equivalently "moving average" filters. When the impulse response is of infinite duration (as it perfectly well can be in a linear shift-invariant system!), then this slogan remains mathematically true, but is of less practical value (unless the impulse response can be truncated without significant effect). We'll learn later how to implement "infinite impulse response" (IIR) filters efficiently.


The EE/DSP perspective.


The goal of this section is to develop the basic material on impulse response and convolution in the style that is common in the digital signal processing literature in the discipline of Electrical Engineering, so as to help you become familiar with the type of notation that you are likely to encounter there. Also, perhaps going over the same ideas again in a different notation will help you to assimilate them -- but be careful to keep the DSP/EE notation separate in your mind from linear algebra notation, or you will become very confused!


In this perspective, we treat a digital signal s as an infinitely-long sequence of numbers. We can adapt the mathematical fiction of infinity to everyday finite reality by assuming that all signal values are zero outside of some finite-length sub-sequence. The positions in one of these infinitely-long sequences of numbers are indexed by integers, so that we take s ( n ) to mean ``the nth number in sequence s ,'' usually called `` s of n '' for short. Sometimes we will alternatively use s ( n ) to refer to the entire sequence s . by thinking of n as a free variable.


We will let an index like n range over negative as well as positive integers, and also zero. Thus


where the curly braces are a notation meaning ``set,'' so that the whole expression means ``the set of numbers s ( n ) where n takes on all values from minus infinity to infinity.''


We will refer to the individual numbers in a sequence s as elements or samples . The word sample comes from the fact that we usually think of these sequences as discretely-sampled versions of continuous functions, such as the result of sampling an acoustic waveform some finite number of times a second, but in fact nothing that is presented in this section depends on a sequence being anything other than an ordered set of numbers.


The unit impulse or unit sample sequence, written δ(n), is a sequence that is one at sample point zero, and zero everywhere else:


The Greek capital sigma, Σ, pronounced "sum", is used as a notation for adding up a set of numbers, typically by having some variable take on a specified set of values. Thus


is shorthand for


is shorthand for


The notation is particularly helpful in dealing with sums over sequences, in the sense of sequence used in this section, as in the following simple example. The unit step sequence, written u(n), is a sequence that is zero at all sample points less than zero, and 1 at all sample points greater than or equal to zero:


The unit step sequence can also be obtained as a cumulative sum of the unit impulse:


Up to n = -1 the sum will be 0, since all the values of δ(n) for negative n are 0; at n = 0 the cumulative sum jumps to 1, since δ(0) = 1; and the cumulative sum stays at 1 for all values of n greater than zero, since all the rest of the values of δ(n) are 0 again.


This is not a particularly impressive use of the notation, but it should help you to understand that it can be perfectly sensible to talk about infinite sums. Note that we can also express the relationship between u(n) and δ(n) in the other direction:


In general, it is useful to talk about applying the ordinary operations of arithmetic to sequences. Thus we can write the product of sequences x and y as xy, meaning the sequence made up of the products of the corresponding elements (not the inner product):


Likewise the sum of sequences x and y can be written x + y, meaning


A sequence x can be multiplied by a scalar, with the meaning that each element of x is individually so multiplied:


Finally, a sequence may be shifted by any integer number of sample points:


We already used this notation when we expressed the unit impulse sequence in terms of the unit step sequence, as the difference between a given sample and the immediately previous sample.


Any sequence can be expressed as a sum of scaled and shifted unit samples. Conceptually, this is trivial: we just make, for each sample of the original sequence, a new sequence whose sole non-zero member is that chosen sample, and we add up all these single-sample sequences to make up the original sequence. Each of these single-sample sequences (really, each sequence contains infinitely many samples, but only one of them is non-zero) can in turn be represented as a unit impulse (a sample of value 1 located at point zero) scaled by the appropriate value and shifted to the appropriate place. In mathematical language, this is


where k is a variable that picks out each of the original samples, uses its value to scale the unit impulse, and then shifts the result to the position of the selected sample.


A system or transform T maps an input sequence x ( n ) onto an output sequence y ( n ):


Historical Intraday Stock Price Data with Python


By popular request, this post will present how to acquire intraday stock data from google finance using python. The general structure of the code is pretty simple to understand. First a request url is created and sent. The response is then read by python to create an array or matrix of the financial data and a vector of time data. This array is created with the help of the popular Numpy package that can be downloaded from here. Then, in one if-statement, the time data is restructured into a proper unix time format and translated to a more familiar date string for each financial data point. The translated time vector is then joined with the financial array to produce a single, easy to work with, financial time series array. Since Numpy has been ported to Python 3, the code I wrote should be compatible with both Python 2.X and 3.X. Here it is:

import urllib2
import urllib
import numpy as np
from datetime import datetime

urldata = {}
urldata['q'] = ticker = 'JPM'   # stock symbol
urldata['x'] = 'NYSE'           # exchange symbol
urldata['i'] = '60'             # interval
urldata['p'] = '1d'             # number of past trading days (max has been 15d)
urldata['f'] = 'd,o,h,l,c,v'    # requested data: d is time, o is open, c is closing, h is high, l is low, v is volume

url_values = urllib.urlencode(urldata)
url = 'http://www.google.com/finance/getprices'
full_url = url + '?' + url_values
req = urllib2.Request(full_url)
response = urllib2.urlopen(req).readlines()
getdata = response
del getdata[0:7]                # drop the header lines of the response

numberoflines = len(getdata)
returnMat = np.zeros((numberoflines, 5))
timeVector = []
index = 0
for line in getdata:
    line = line.strip('a')
    listFromLine = line.split(',')
    returnMat[index, :] = listFromLine[1:6]
    timeVector.append(int(listFromLine[0]))
    index += 1

# convert Unix or epoch time to something more familiar
for x in timeVector:
    if x > 500:
        z = x
        timeVector[timeVector.index(x)] = datetime.fromtimestamp(x)
    else:
        y = z + x*60            # multiply by interval
        timeVector[timeVector.index(x)] = datetime.fromtimestamp(y)

tdata = np.array(timeVector)
time = tdata.reshape((len(tdata), 1))
intradata = np.concatenate((time, returnMat), axis=1)  # array of all data with the properly formatted times


Here is a quick example of what can be done with this data in Python 2.7 using the Numpy and matplotlib libraries. It is nothing fancy, just the usual moving average computations and some styling preferences.


The python code that was used to create the plot above can is posted below: ### Example on how to use and plot this data import matplotlib. pyplot as plt import matplotlib. font_manager as font_manager import matplotlib. dates as mdates import matplotlib. ticker as mticker import numpy as np def relative_strength(prices, n=14): deltas = np. diff(prices) seed = deltas[:n+1] up = seed[seed>=0].sum()/n down = - seed[seed 0: upval = delta downval = 0. else: upval = 0. downval = - delta up = (up*(n-1) + upval)/n down = (down*(n-1) + downval)/n rs = up/down rsi[i] = 100. - 100./(1.+rs) return rsi def moving_average(p, n, type='simple'): """ compute an n period moving average. type is 'simple' | 'exponential' """ p = np. asarray(p) if type=='simple': weights = np. ones(n) else: weights = np. exp(np. linspace(-1. 0. n)) weights /= weights. sum() a = np. convolve(p, weights, mode='full')[:len(p)] a[:n] = a[n] return a def moving_average_convergence(p, nslow=26, nfast=12): """ compute the MACD (Moving Average Convergence/Divergence) using a fast and slow exponential moving avg' return value is emaslow, emafast, macd which are len(p) arrays """ emaslow = moving_average(p, nslow, type='exponential') emafast = moving_average(p, nfast, type='exponential') return emaslow, emafast, emafast - emaslow plt. rc('axes', grid=True) plt. rc('grid', color='0.75', linestyle='-', linewidth=0.5) textsize = 9 left, width = 0.1, 0.8 rect1 = [left, 0.7, width, 0.2] rect2 = [left, 0.3, width, 0.4] rect3 = [left, 0.1, width, 0.2] fig = plt. figure(facecolor='white') axescolor = '#f6f6f6' # the axies background color ax1 = fig. add_axes(rect1, axisbg=axescolor) #left, bottom, width, height ax2 = fig. add_axes(rect2, axisbg=axescolor, sharex=ax1) ax2t = ax2.twinx() ax3 = fig. add_axes(rect3, axisbg=axescolor, sharex=ax1) ### plot the relative strength indicator prices = intradata[:,4] rsi = relative_strength(prices) t = intradata[:,0] fillcolor = 'darkgoldenrod' ax1.plot(t, rsi, color=fillcolor) ax1.axhline(70, color=fillcolor) ax1.axhline(30, color=fillcolor) ax1.fill_between(t, rsi, 70, where=(rsi>=70), facecolor=fillcolor, edgecolor=fillcolor) ax1.fill_between(t, rsi, 30, where=(rsi 70 = overbought', va='top', transform=ax1.transAxes, fontsize=textsize) ax1.text(0.6, 0.1, ' 0 ax2.vlines(t[up], low[up], high[up], color='black', label='_nolegend_') ax2.vlines(t[


up], color='black', label='_nolegend_') ma5 = moving_average(prices, 5, type='simple') ma50 = moving_average(prices, 50, type='simple') linema5, = ax2.plot(t, ma5, color='blue', lw=2, label='MA (5)') linema50, = ax2.plot(t, ma50, color='red', lw=2, label='MA (50)') props = font_manager. FontProperties(size=10) leg = ax2.legend(loc='center left', shadow=True, fancybox=True, prop=props) leg. get_frame().set_alpha(0.5) volume = (intradata[:,4]*intradata[:,5])/1e6 # dollar volume in millions vmax = volume. max() poly = ax2t. fill_between(t, volume, 0, label='Volume', facecolor=fillcolor, edgecolor=fillcolor) ax2t. set_ylim(0, 5*vmax) ax2t. set_yticks([]) ### compute the MACD indicator fillcolor = 'darkslategrey' nslow = 26 nfast = 12 nema = 9 emaslow, emafast, macd = moving_average_convergence(prices, nslow=nslow, nfast=nfast) ema9 = moving_average(macd, nema, type='exponential') ax3.plot(t, macd, color='black', lw=2) ax3.plot(t, ema9, color='blue', lw=1) ax3.fill_between(t, macd-ema9, 0, alpha=0.5, facecolor=fillcolor, edgecolor=fillcolor) ax3.text(0.025, 0.95, 'MACD (%d, %d, %d)'%(nfast, nslow, nema), va='top', transform=ax3.transAxes, fontsize=textsize) # turn off upper axis tick labels, rotate the lower ones, etc for ax in ax1, ax2, ax2t, ax3: if ax!=ax3: for label in ax. get_xticklabels(): label. set_visible(False) else: for label in ax. get_xticklabels(): label. set_rotation(30) label. set_horizontalalignment('right') ax. fmt_xdata = mdates. DateFormatter('%Y-%m-%d') class MyLocator(mticker. MaxNLocator): def __init__(self, *args, **kwargs): mticker. MaxNLocator.__init__(self, *args, **kwargs) def __call__(self, *args, **kwargs): return mticker. MaxNLocator.__call__(self, *args, **kwargs) # at most 5 ticks, pruning the upper and lower so they don't overlap # with other ticks ax2.yaxis. set_major_locator(MyLocator(5, prune='both')) ax3.yaxis. set_major_locator(MyLocator(5, prune='both')) plt. show()


Hi, getting error with fill_between()


Traceback (most recent call last): File "first.py", line 148, in ax1.fill_between(t, rsi, 70, where=(rsi>=70), facecolor=fillcolor, edgecolor=fillcolor) File "/usr/lib/python2.7/site-packages/matplotlib/axes.py", line 6777, in fill_between y1 = ma.masked_invalid(self.convert_yunits(y1)) File "/usr/lib/python2.7/site-packages/numpy/ma/core.py", line 2241, in masked_invalid condition = ~(np.isfinite(a)) TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''


could you check?






* Imagine you have data on prices for many products.


* For each of the products you record weekly price information.


clear
set obs 200


gen prod_id = _n


* Each product has a unique average price
gen prod_price = rpoisson(5)/7


* You have data on weekly prices for 200 weeks.
expand 200
bysort prod_id: gen t=_n
label var t "Week"


* There is also some seasonal variation
gen seasonal = .2*sin(_pi*t/50)


* As well as a general time trend
gen trend = t*.005


* The first observation is not correlated with anything
gen price = prod_price*2.5 + trend + rpoisson(10)/10 if t==1
replace price = prod_price*2 + trend + seasonal + .7*price[_n-1] + .3*rpoisson(10)/10 if t==2
replace price = prod_price + trend + seasonal + .5*price[_n-1] + .2*price[_n-2] + .3*rpoisson(10)/10 if t==3
replace price = prod_price + trend + seasonal + .3*price[_n-1] + .2*price[_n-2] + .2*price[_n-3] + .3*rpoisson(10)/10 if t==4
replace price = prod_price + trend + seasonal + .3*price[_n-1] + .175*price[_n-2] + .125*price[_n-3] + .1*price[_n-4] + .3*rpoisson(10)/10 if t>4


* Create a global to store the graph commands
global twograph

forv i = 1/6 {
	global twograph ${twograph} (line price t if prod_id == `i')
}


twoway $twograph, legend(off) title(True price trends for first six products)


* Now let's imagine that the above generated data is the true price information which is fundamentally unobservable.


* Instead you have multiple collections of data per week on prices, each of which varies by some random additive error.
expand 3


bysort prod_id t: gen prod_obs = _n


gen price_collect = price + rnormal()*.25


* However, 10% of the price entries that you have were mistakenly entered wrong.


gen entry_error = rbinomial(1,.1)
gen scalar_error = rnormal()+1


gen price_obs = price_collect*(1+entry_error*scalar_error)
label var price_obs "Recorded Price"


* In addition, 35% of your price data was never collected
gen missing = rbinomial(1,.35)


drop if missing==1


* Create a global to store the graph commands
global twograph


forv i = 1/6 {
  global twograph $twograph (line price_obs t if prod_id == `i' & prod_obs==1)
}


twoway $twograph, legend(off) title(Observed price trends for first six products)


keep t price_obs prod_id entry_error
* I am keeping entry_error in the data set as a means of comparison, though it would not be directly observed.


* The question is:


* Can you now with this messy data recover price data that is similar to the original?


* The first thing that we should exploit is the duplicate recorded data.


scatter price_obs t if prod_id == 1, title(It is easy to see individual deviations)


* It is easy to see individual deviations, but we do not want to go through all 200 products to identify price outliers individually.
* We want to come up with a system to identify outliers.


* Let's generate a mean by product and time
bysort prod_id t: egen price_mean = mean(price_obs)


* Let's flag any observation for which the product-time mean is more than 120% of the observed price or less than 80% of it.
gen flag = (price_mean > price_obs*1.2 | price_mean < price_obs*.8)


* Let's see how it is working:
two (scatter price_obs t if prod_id == 1) ///
    (scatter price_obs t if prod_id == 1 & flag==1, msymbol(lgx)) ///
    , title(Some of the outliers can be identified just by looking at the mean) legend(off)


corr flag entry_error
* Our flag is correlated about 45% with the entry errors. This is good but we can do better.


* I propose that rather than using just the mean, we construct a moving average of prices and see how each entry deviates from it.
* The only problem is that the moving average command requires xtset, and that requires only one entry per time period.
* So, I say we rescale the time variable and add in the observation number, as if each observation were recorded at a different time of the week.


* We need to regenerate prod_obs since we do not know which observation is missing from each product.
bysort prod_id t: gen prod_obs = _n


gen t2 = t*4 + prod_obs


* xtset declares the panel id and the time variable.
xtset prod_id t2


* The command we will be using is "tssmooth"


* It is coded such that specifying ma means moving average, and window tells Stata how many time periods to count ahead and how many behind in the moving average.
* This command can take a little while.
tssmooth ma ma_price_obs=price_obs, window(23 0 23)
* 23 is in effect 5 weeks ahead and 5 weeks behind.
* The 0 tells Stata not to include the observation itself in that average.


* The moving average
two (scatter price_obs t if prod_id == 1) ///
    (line ma_price_obs t if prod_id == 1) ///
    (line price_mean t if prod_id == 1) ///
    , title(The moving average is less susceptible to outliers)


* The moving average is more stable than just the time average.


* Let's try flagging using the moving average
cap drop flag2
gen flag2 = (ma_price_obs > price_obs*1.2 | ma_price_obs < price_obs*.8)


two (scatter price_obs t if prod_id == 1) ///
    (scatter price_obs t if prod_id == 1 & flag2==1, msymbol(lgx)) ///
    , title(The moving average can also be useful) legend(off)


corr flag2 entry_error


* Drop our flagged data
drop if flag2==1


* Collapse to the weekly level
collapse price_obs, by(prod_id t)
label var price_obs "Mean price observed"


forv i = 1/6 {
  global twograph $twograph (scatter price_obs t if prod_id == `i')
}


twoway $twograph, legend(off) title(Observed price trends for first six products)
* The data is looking a lot better but we still clearly have some unwanted outliers.


* We could take advantage of the cross-product trends to help identify outliers within product prices.
bysort t: egen ave_price = mean(price_obs)


reg price_obs ave_price if prod_id == 1
predict resid1, residual


reg price_obs ave_price if prod_id == 2
predict resid2, residual


reg price_obs ave_price if prod_id == 3
predict resid3, residual


twoway (line resid1 t if prod_id == 1) ///
    (line price_obs t if prod_id == 1) ///
    (line resid2 t if prod_id == 2) ///
    (line price_obs t if prod_id == 2) ///
    (line resid3 t if prod_id == 3) ///
    (line price_obs t if prod_id == 3) ///
    , title(The residuals are clear indicators of outliers) legend(off)


* Finally, let us drop observations with residuals that are greater than 1.5 standard deviations from the mean.


* Recreate the flag variable (it was lost in the collapse above)
cap drop flag
gen flag = 0
qui forv i=1/200 {
  reg price_obs ave_price if prod_id == `i'
  predict resid_temp, residual
  sum resid_temp
  replace flag = (resid_temp-r(mean) > r(sd)*1.5 | resid_temp-r(mean) < -r(sd)*1.5) if prod_id == `i'
  drop resid_temp
}


* Let's see how it is working:
two (scatter price_obs t if prod_id == 2) ///
    (scatter price_obs t if prod_id == 2 & flag==1, msymbol(lgx)) ///
    , title(Now just trying to remove some final outliers) legend(off)


* Plotting product 1 pricing relative to outliers.
global twograph


forv i = 1/6 {
  global twograph $twograph (line price_obs t if prod_id == `i')
}


* Finally dropping the outliers
drop if flag


* One final graph
global twograph


forv i = 1/6 {
  global twograph $twograph (scatter price_obs t if prod_id == `i')
}


twoway $twograph, legend(off) title(Observed price trends for first six products)


* Not as clean as our first graph but definitely much improved.


Re: error using the moving average function


From . "Oleg Komarov" <oleg. komarovRemove. this@xxxxxxxxxx >


Fecha . Thu, 14 Jan 2010 15:48:04 +0000 (UTC)


"Wayne King" & Gt; Loren Shure > & Gt; & Gt; & Gt; & Gt; & Gt; Px = timeseries(rand(100,1)); & Gt; & Gt; & Gt; size(Px) > & Gt; & Gt; ans = > & Gt; & Gt; 100 1 > & Gt; & Gt; Px = timeseries(rand(1,100)); & Gt; & Gt; & Gt; size(Px) > & Gt; & Gt; ans = > & Gt; & Gt; 100 1 > & Gt; & Gt; & Gt; & Gt; & Gt; So i think there is a bug. & Gt; & Gt; & Gt; If TMW can confirm. & Gt; & Gt; & Gt; & Gt; & Gt; & Gt; Oleg > & Gt; & Gt; & Gt; & Gt; & Gt; & Gt; The first input to tsmovavg needs to be a timeseries object, not a > & Gt; regular vector as Px seems to be. You'd need to place/convert Px to the > & Gt; right class. & Gt; & Gt; & Gt; & Gt; -- > & Gt; Loren > & Gt; http://blogs. mathworks. com/loren > & Gt; But Px is a timeseries obj: > & Gt; & Gt; & gt; Px = timeseries(rand(100,1)); & Gt; & Gt; & gt; isafin(Px,'timeseries') > ans = > 1 > & Gt; & gt; size(Px) > ans = > 100 1 > & Gt; & gt; size(Px.') > ans = > 100 1 > & Gt; & gt; output = tsmovavg(Px, 's',30, 1) >. Error using ==> tsmovavg>simplema at 238 > Lag must be scalar greater than 0 or less than the number of observations. & Gt; & Gt; Error in ==> tsmovavg at 98 > vout = simplema(vin, lag, vinVars, observ); & Gt; & Gt; As you can see the size function doesn't recognize more than 1 observations wheter the ts obj is a col vec or a row vec. & Gt; & Gt; Oleg


Hi Oleg (and others), it looks to me that tsmovavg() is designed to take either a vector (again it defaults to operate along the 2nd dimension, but can work on column vectors by specifying the dim argument as 1), or as an overloaded method to work on fints object (financial time series object). So that any of the following will work:


x = randn(100,1); output = tsmovavg(x,'s',10,1); output = tsmovavg(x','s',10); dates =[today:today+99]'; tsobj = fints(dates, x); output = tsmovavg(tsobj,'s',10);


Wayne You're right. I was misled by its name. tsmovavvg is listed under the Financial toolbox while timeseries is included in the stadard package.


Anyway, the point is that b = timeseries(rand(5, 4),'Name','LaunchData'); % example 1 from timeseries size(b) ans = 5 1


Should it be that way?


Relevant Pages


Re: error using the moving average function . & Gt; & gt; Loren Shure . >>> The first input to tsmovavg needs to be a timeseries object, . & Gt; & gt; But Px is a timeseries obj: . & Gt; & gt; ans = . (comp. soft-sys. matlab)


Re: dates . So I made my own timeseries using the following. . Are you upset you are one second short? . Is it anything to do with indexing being off by one somewhere? . (comp. soft-sys. matlab)


Re: Loop throgh a matrix . "Oleg Komarov"> "Matt J " wrote in message . & Gt; & gt; ans = . (comp. soft-sys. matlab)


UNIT 40 - SPATIAL INTERPOLATION I


Compiled with assistance from Nigel M. Waters, University of Calgary


spatial interpolation is the procedure of estimating the value of properties at unsampled sites within the area covered by existing observations


in almost all cases the property must be interval or ratio scaled


can be thought of as the reverse of the process used to select the few points from a DEM which accurately represent the surface


rationale behind spatial interpolation is the observation that points close together in space are more likely to have similar values than points far apart (Tobler's Law of Geography)


spatial interpolation is a very important feature of many GISs


spatial interpolation may be used in GISs:


to provide contours for displaying data graphically


to calculate some property of the surface at a given point


to change the unit of comparison when using different data structures in different layers


frequently is used as an aid in the spatial decision making process both in physical and human geography and in related disciplines such as mineral prospecting and hydrocarbon exploration


many of the techniques of spatial interpolation are two-dimensional developments of the one-dimensional methods originally developed for time series analysis


this unit introduces spatial interpolation and examines point based interpolation, while the next looks at areal procedures and some applications


there are several different ways to classify spatial interpolation procedures:


given a number of points whose locations and values are known, determine the values of other points at predetermined locations


point interpolation is used for data which can be collected at point locations, e.g. weather station readings, spot heights, oil well readings, porosity measurements


interpolated grid points are often used as the data input to computer contouring algorithms


once the grid of points has been determined, isolines (e.g. contours) can be threaded between them using a linear interpolation on the straight line between each pair of grid points


point to point interpolation is the most frequently performed type of spatial interpolation done in GIS


lines to points


e.g. contours to elevation grids


areal interpolation


given a set of data mapped on one set of source zones determine the values of the data for a different set of target zones


e.g. given population counts for census tracts, estimate populations for electoral districts


global interpolators determine a single function which is mapped across the whole region


a change in one input value affects the entire map


local interpolators apply an algorithm repeatedly to a small portion of the total set of points


a change in an input value only affects the result within the window


global algorithms tend to produce smoother surfaces with less abrupt changes


are used when there is a hypothesis about the form of the surface, e.g. a trend


some local interpolators may be extended to include a large proportion of the data points in set, thus making them in a sense global


the distinction between global and local interpolators is thus a continuum and not a dichotomy


this has led to some confusion and controversy in the literature


exact interpolators honor the data points upon which the interpolation is based


the surface passes through all points whose values are known


honoring data points is seen as an important feature in many applications, e.g. the petroleum industry


proximal interpolators, B-splines and Kriging methods all honor the given data points


Kriging, as discussed below, may incorporate a nugget effect and if this is the case the concept of an exact interpolator ceases to be appropriate


approximate interpolators are used when there is some uncertainty about the given surface values


this utilizes the belief that in many data sets there are global trends, which vary slowly, overlain by local fluctuations, which vary rapidly and produce uncertainty (error) in the recorded values


the effect of smoothing will therefore be to reduce the effects of error on the resulting surface


stochastic methods incorporate the concept of randomness


the interpolated surface is conceptualized as one of many that might have been observed, all of which could have produced the known data points


stochastic interpolators include trend surface analysis, Fourier analysis and Kriging


procedures such as trend surface analysis allow the statistical significance of the surface and uncertainty of the predicted values to be calculated


deterministic methods do not use probability theory (e.g. proximal)


a typical example of a gradual interpolator is the distance weighted moving average


usually produces an interpolated surface with gradual changes


however, if the number of points used in the moving average is reduced to a small number, or even one, there would be abrupt changes in the surface


it may be necessary to include barriers in the interpolation process


semipermeable, e.g. weather fronts


will produce quickly changing but continuous values


impermeable barriers, e.g. geologic faults


will produce abrupt changes


Lam (1983) and Burrough (1986) describe a variety of quantitative interpolation methods suitable for computer contouring algorithms


in this and the next sections, these are divided into exact and approximate methods


this section deals with exact methods


all values are assumed to be equal to the nearest known point


is a local interpolator


computing load is relatively light


output data structure is Thiessen polygons with abrupt changes at boundaries


has ecological applications such as territories and influence zones


best for nominal data although originally used by Thiessen for computing areal estimates from rainfall data


is absolutely robust, always produces a result, but has no "intelligence" about the system being analyzed


available in very few mapping packages, SYMAP is a notable exception


uses a piecewise polynomial to provide a series of patches resulting in a surface that has continuous first and second derivatives


ensures continuity in:


elevation (zero-order continuity) - surface has no cliffs


slope (first-order continuity) - slopes do not change abruptly, there are no kinks in contours


curvature (second order continuity) - minimum curvature is achieved


produces a continuous surface with minimum curvature


output data structure is points on a raster


note that maxima and minima do not necessarily occur at the data points


is a local interpolator


can be exact or used to smooth surfaces


computing load is moderate


best for very smooth surfaces


poor for surfaces which show marked fluctuations, this can cause wild oscillations in the spline


are popular in general surface interpolation packages but are not common in GISs


can be approximated by smoothing contours drawn through a TIN model


see Burrough (1986), Davis (1986) and mathematical aspects in Lam (1983) and Hearn and Baker (1986)


also described in "numerical approximation theory"


developed by Georges Matheron, as the "theory of regionalized variables", and D. G. Krige as an optimal method of interpolation for use in the mining industry


the basis of this technique is the rate at which the variance between points changes over space


this is expressed in the variogram which shows how the average difference between values at points changes with distance between points


the vertical axis is E[(z_i - z_j)^2], i.e. the "expectation" of the squared difference


i.e. the average difference in elevation of any two points distance d apart


d (horizontal axis) is distance between i and j


most variograms show behavior like the diagram


the upper limit (asymptote) of De is called the sill


the distance at which this limit is reached is called the range


the intersection with the y axis is called the nugget


a non-zero nugget indicates that repeated measurements at the same point yield different values


in developing the variogram it is necessary to make some assumptions about the nature of the observed variation on the surface:


simple Kriging assumes that the surface has a constant mean, no underlying trend and that all variation is statistical


universal Kriging assumes that there is a deterministic trend in the surface that underlies the statistical variation


in either case, once trends have been accounted for (or assumed not to exist), all other variation is assumed to be a function of distance


the input data for Kriging is usually an irregularly spaced sample of points


to compute a variogram we need to determine how variance increases with distance


begin by dividing the range of distance into a set of discrete intervals, e.g. 10 intervals between distance 0 and the maximum distance in the study area


for every pair of points, compute distance and the squared difference in z values


assign each pair to one of the distance ranges, and accumulate total variance in each range


after every pair has been used (or a sample of pairs in a large dataset) compute the average variance in each distance range


plot this value at the midpoint distance of each range
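
As a hedged illustration of the steps just listed (an editor's sketch, not part of the original unit), here is a minimal Python version that bins squared differences by distance to form an empirical semivariogram; the coordinates x, y and values z are synthetic, and the 0.5 factor gives the conventional semivariance corresponding to the averaged squared difference described above.

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
z = 0.05*x + 0.02*y + rng.normal(0, 1, 50)        # synthetic surface values

nbins = 10
dmax = np.hypot(100, 100)
edges = np.linspace(0, dmax, nbins + 1)
sums = np.zeros(nbins)
counts = np.zeros(nbins)

# for every pair of points, accumulate the squared difference in the right distance bin
for i in range(len(z)):
    for j in range(i + 1, len(z)):
        d = np.hypot(x[i] - x[j], y[i] - y[j])
        k = min(np.searchsorted(edges, d, side='right') - 1, nbins - 1)
        sums[k] += (z[i] - z[j])**2
        counts[k] += 1

gamma = 0.5 * sums / np.maximum(counts, 1)        # average semivariance per distance bin
midpoints = 0.5 * (edges[:-1] + edges[1:])
for m, g in zip(midpoints, gamma):
    print("distance ~%6.1f  semivariance %6.3f" % (m, g))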


once the variogram has been developed, it is used to estimate distance weights for interpolation


interpolated values are the sum of the weighted values of some number of known points where weights depend on the distance between the interpolated and known points


weights are selected so that the estimates are:


unbiased (if used repeatedly, Kriging would give the correct result on average)


minimum variance (variation between repeated estimates is minimum)
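
To make the weight selection concrete, here is a small hedged sketch (not from the unit): it assumes a spherical variogram model with chosen nugget, sill and range, and solves the ordinary kriging system for the weights at one interpolation point; the sample data are made up.

import numpy as np

def spherical(h, nugget=0.1, sill=1.0, a=50.0):
    """Spherical variogram model gamma(h), with gamma(0) = 0."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5*h/a - 0.5*(h/a)**3)
    g = np.where(h >= a, sill, g)
    return np.where(h == 0, 0.0, g)

# known points, their values, and the location to estimate (all synthetic)
pts = np.array([[10.0, 20.0], [30.0, 15.0], [25.0, 40.0], [5.0, 35.0]])
vals = np.array([1.2, 0.8, 1.5, 1.1])
target = np.array([20.0, 25.0])

n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)   # pairwise distances
A = np.zeros((n + 1, n + 1))
A[:n, :n] = spherical(d)
A[:n, n] = 1.0        # Lagrange row/column enforces that the weights sum to 1 (unbiasedness)
A[n, :n] = 1.0
b = np.append(spherical(np.linalg.norm(pts - target, axis=1)), 1.0)

w = np.linalg.solve(A, b)[:n]          # kriging weights (unbiased, minimum variance)
print("weights:", np.round(w, 3), " estimate:", round(w @ vals, 3))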


problems with this method:


when the number of data points is large this technique is computationally very intensive


the estimation of the variogram is not simple, no one technique is best


since there are several crucial assumptions that must be made about the statistical nature of the


variation, results from this technique can never be absolute


simple Kriging routines are available in the Surface II package (Kansas Geological Survey) and Surfer (Golden Software), and in the GEOEAS package for the PC developed by the US Environmental Protection Agency


traditionally not a highly regarded method among geographers and cartographers


however, Dutton-Marion (1988) has shown that among geologists this is a very important procedure and that most geologists actually distrust the more sophisticated, mathematical algorithms


they feel that they can use their expert knowledge, modelling capabilities and experience and generate a more realistic interpolation by integrating this knowledge into the construction of the geological surface


attempts are now being made to use knowledge engineering techniques to extract this knowledge from experts and build it into an expert system for interpolation


see Unit 74 for more on this topic


characteristics of this method include:


procedures are local as different methods may be used by the expert on different parts of the map


tend to honor data points


abrupt changes such as faults are more easily modelled using these methods


the surfaces are subjective and vary from expert to expert


output data structure is usually in the form of a contour


surface is approximated by a polynomial


output data structure is a polynomial function which can be used to estimate values of grid points on a raster or the value at any location


the elevation z at any point (x, y) on the surface is given by an equation in powers of x and y


e.g. a linear equation (degree 1) describes a tilted plane surface:


e.g. a quadratic equation (degree 2) describes a simple hill or valley:


z = a + bx + cy + dx^2 + exy + fy^2


in general, any cross-section of a surface of degree n can have at most n-1 alternating maxima and minima


e.g. a cubic surface can have one maximum and one minimum in any cross-section


equation for the cubic surface:


z = a + bx + cy + dx^2 + exy + fy^2 + gx^3 + hx^2y + ixy^2 + jy^3
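
A hedged sketch (not part of the unit) of how a degree-2 trend surface can be fit to scattered points by ordinary least squares, using synthetic data:

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 60), rng.uniform(0, 10, 60)
z = 2.0 + 0.5*x - 0.3*y + 0.05*x**2 - 0.02*x*y + 0.01*y**2 + rng.normal(0, 0.1, 60)

# design matrix for z = a + bx + cy + dx^2 + exy + fy^2
X = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
print("fitted coefficients a..f:", np.round(coef, 3))

# evaluate the fitted surface at an arbitrary location
xq, yq = 5.0, 5.0
zq = coef @ np.array([1.0, xq, yq, xq**2, xq*yq, yq**2])
print("trend surface estimate at (5,5):", round(zq, 3))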


a trend surface is a global interpolator


assumes the general trend of the surface is independent of random errors found at each sampled point


computing load is relatively light


problems


statistical assumptions of the model are rarely met in practice


edge effects may be severe


a polynomial model produces a rounded surface


this is rarely the case in many human and physical applications


available in a great many mapping packages


see Davis (1973) and Sampson (1978) for non-orthogonal polynomials; Mather (1976) for orthogonal polynomials


approximates the surface by overlaying a series of sine and cosine waves


a global interpolator


computing load is moderate


output data structure is the Fourier series which can be used to estimate grid values for a raster or at any point


best for data sets which exhibit marked periodicity, such as ocean waves


rarely incorporated in computing packages


simple program and discussion in Davis (1973)


estimates are weighted averages of the values at n known points, i.e. z* = sum(w_i * z_i) / sum(w_i)


where w is some function of distance, such as:


an almost infinite variety of algorithms may be used, variations include:


the nature of the distance function


varying the number of points used


the direction from which they are selected


is the most widely used method
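
A hedged sketch (an editor's illustration, not from the unit) of a distance-weighted moving average using inverse-distance-squared weights over the k nearest known points; the choice of weight function, k, and the sample data are assumptions.

import numpy as np

def idw(xq, yq, x, y, z, k=6, power=2.0, eps=1e-12):
    """Inverse-distance-weighted average of the k nearest known points."""
    d = np.hypot(x - xq, y - yq)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest]**power + eps)       # w is a decreasing function of distance
    return np.sum(w * z[nearest]) / np.sum(w)

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 100, 40), rng.uniform(0, 100, 40)
z = np.sin(x/20.0) + np.cos(y/25.0)
print("IDW estimate at (50, 50):", round(idw(50.0, 50.0, x, y, z), 3))

Note that, as the unit points out below, the estimate is a convex combination of the observed values, so it can never fall outside their range.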


objections to this method arise from the fact that the range of interpolated values is limited by the range of the data


no interpolated value will be outside the observed range of z values


other problems include:


how many points should be included in the averaging?


what to do about irregularly spaced points?


how to deal with edge effects?


Burrough, P. A. 1986. Principles of Geographical Information Systems for land Resources Assessment, Clarendon, Oxford. See Chapter 8.


Davis, J. C. 1986. Statistics and Data Analysis in Geology, 2nd edition, Wiley, New York. (Also see the first, 1973, edition for program listings.)


Dutton-Marion, K. E. 1988. Principles of Interpolation Procedures in the Display and Analysis of Spatial Data: A Comparative Analysis of Conceptual and Computer Contouring, unpublished Ph. D. Thesis, Department of Geography, University of Calgary, Calgary, Alberta.


Hearn, D. and Baker, M. P. 1986. Computer Graphics, Prentice-Hall Inc, Englewood Cliffs, N. J.


Jones, T. A. Hamilton, D. E. and Johnson, C. R. 1986. Contouring Geologic Surfaces with the Computer, Van Nostrand Reinhold, New York


Lam, N. 1983. "Spatial Interpolation Methods: A Review," The American Cartographer 10(2):129-149.


Mather, P. M. 1976. Computational Methods of Multivariate Analysis in Physical Geography, Wiley, New York.


Sampson, R. J. 1978. Surface II, revised edition, Kansas Geological Survey, Lawrence, Kansas.


Waters, N. M. 1988. "Expert Systems and Systems of Experts," Chapter 12 in W. J. Coffey, ed. Geographical Systems and Systems of Geography: Essays in Honour of William Warntz, Department of Geography, University of Western Ontario, London, Ontario.


An important class of interpolation methods is missing here - so called radial basis functions, such as multiquadrics, thin plate spline, thin plate spline with tension, regularized spline with tension and a large number of other flavours of this approach (also sometimes referred to as the variational approach). These methods are available in almost every GIS, from ArcINFO, GRASS, SURFER to specialized visualization packages. The description can be found at Mitas, L. Mitasova, H. 1999, Spatial Interpolation. In: P. Longley, M. F. Goodchild, D. J. Maguire, D. W. Rhind (Eds.), Geographical Information Systems: Principles, Techniques, Management and Applications, GeoInformation International, Wiley, 481-492.


1. Are there other techniques for surface generation? How many of the above procedures are commonly used? How would they be ranked in terms of popularity? Give examples from the literature of where they have been used.


2. How does hand contouring rate as an alternative? What did you think of it and have you changed your mind? What are the key features and processes involved in hand contouring?


3. Explain the advantages and disadvantages of manual interpolation as used in hand contouring over computer based interpolation as used in a computer contouring package.


4. Describe the different ways in which spatial interpolation algorithms can be classified.




I propose the following algorithm for calculating the average of a set of directions, measured from 0 to 360 degrees:


1. Find the average and standard deviation of the given numbers. 2. Increase the smallest number by 360. 3. Repeat steps 1 and 2 until all numbers are greater than 360. 4. Choose the average that yields the smallest standard deviation. 5. If the average is greater than 360 then subtract 360.


I suggest dividing them along 2 axes: X for cos(a) and Y for sin(a). We can compute the average value for each axis, X and Y, and then find the final average value of the wind direction. -- Romeli, Electrical engineer, PT. Smelting, Affiliate of Mitsubishi Materials, phone: 62-31-3976464, fax: 62-31-3976466, www.smelting.co.id


You have circular data, so you must apply circular mathematics.


In the first step, you should calculate the sum of sin(theta) and the sum of cos(theta), where theta is your wind direction. In the second step, take atan2(sum(sin(theta)), sum(cos(theta))), which represents the average wind direction. The answer is defined in the mathematical convention, however, and that convention is traditionally not used in meteorology: 0 (or 360) indicates a northerly wind, whereas in mathematics it is 90 degrees; hence you should convert to meteorological wind directions.


Dr. Hasan TATLI (Canakkale Onsekiz Mart University, Dept. of Geography, Turkey)
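
[Editor's note: a small hedged Python sketch of the sum-of-sines/cosines recipe described above, not Dr. Tatli's own code. If the inputs are already meteorological directions in degrees, the circular mean can be taken directly and wrapped back into 0-360.]

import math

def mean_wind_direction(directions_deg):
    """Circular (vector) mean of wind directions given in degrees."""
    s = sum(math.sin(math.radians(d)) for d in directions_deg)
    c = sum(math.cos(math.radians(d)) for d in directions_deg)
    mean = math.degrees(math.atan2(s, c))
    return mean % 360.0                      # wrap the result into 0..360

print(mean_wind_direction([350, 10]))        # about 0/360 degrees (north), not 180
print(mean_wind_direction([90, 180, 270]))   # about 180 degrees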


Sorry to disrupt the party, but I'm pretty sure most answers to this posting are not correct.


Averaging x, y-directions or sines of angles (e.g. obtained after atan() calls) is bound to fail because it will produce biased results.


What should be done is computing the first eigen-vector of the direction covariance-matrix. Here is how to do it:


1. Stack all n directions (normalised wind-vectors) in an (n x 2)-matrix M (i.e. 1 direction per row).


2. Compute the direction covariance matrix as follows: C = M' * M where M' is the transpose of M and * is the regular matrix-product. C should now be a (2 x 2)-matrix.


3. Compute the first eigen-vector of this matrix. This is the dominant direction.


Notice that this procedure does not take the magnitude of each direction into account, i. e. the dominant direction is not biased towards the direction(s) with largest wind-magnitude. If you also want to take this into account, just omit the normalisation of wind-vectors when computing C.
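
[Editor's note: a hedged numpy transcription of the covariance/eigenvector recipe above; the sample directions are made up.]

import numpy as np

directions_deg = np.array([350.0, 10.0, 5.0, 355.0, 170.0])   # sample wind directions
theta = np.radians(directions_deg)
M = np.column_stack([np.cos(theta), np.sin(theta)])   # one unit direction vector per row

C = M.T @ M                                  # 2 x 2 direction "covariance" matrix
eigvals, eigvecs = np.linalg.eigh(C)         # eigh: symmetric matrix, eigenvalues ascending
dominant = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue

angle = np.degrees(np.arctan2(dominant[1], dominant[0])) % 360.0
print("dominant axis at about", round(angle, 1), "degrees (or the opposite direction)")

Note the sign ambiguity: the eigenvector defines an axis rather than a direction, so 170 and 350 degrees reinforce the same axis here. That is one situation where this method can differ from the atan2 vector mean, which is relevant to the question asked below.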


I tried some examples using your eigen-vector method and using the method I proposed:


atan2(sum of sines, sum of cosines)


and I got the same answer. Can you give a simple example where the eigen-vector method gives a different result from mine? (See my posting above in this thread).


Robert Scott Real-Time Specialties Embedded Systems Consulting


I am working on the effectiveness of windbreaks, and accurate wind direction is very important to me. I have a huge data set collected at 15 s, 30 s or 1 min interval, and I want to calculate hourly averages. I went through the posting, and looks like it will take me a lot of time if I follow the steps, and I didn't find any definite solution.


Is there a difference in methods for real-time processing (while the data is collected) and post-processing (after the data is collected) to calculate the average? I checked the sensor documentation and it says it uses vector components to calculate real-time averages. What about using the software that comes with the weather station? I know it will generate averages too. Any suggestion will be highly appreciated.


If your sensor or weather station software uses vector components to calculate real-time averages, then it is doing it correctly. If you can get the sensor or the weather station software to create the hourly averages for you, then do so. If you need to process the raw data yourself, then just do the same thing. Break every individual reading (which is a vector) into North and East components. Form the averages of the North and East components. The hourly average is the vector created from the average North and average East components. It does not matter if the data is being analyzed in real-time or after the fact. The calculation is the same.


Robert Scott Real-Time Specialties Embedded Systems Consulting


IMO, Bob Peterson has the best solution (and it's one of the simplest approaches, too).


But I would probably tweak his approach.


Because the only problem with his solution is that 1/2 hour of wind due north at 50 mph followed by 1/2 hour of wind due south at 50 mph would yield an average wind velocity of 0 mph. I'm guessing you would probably prefer the average value to equal 50 mph.


What I would do is this.


To compute the average direction, convert each direction sample into a UNITY vector, add them up, and divide by the number of samples.


To obtain the average velocity, add together the absolute value of all the velocity samples and divide by the number of samples.


Hope this helps!


Averaging unity direction vectors can also be misleading. Suppose the wind is nearly calm for much of the day, but when it blows hard, it always blows from the West (270 degrees). But when the wind is nearly calm, it is mostly from the East (090 degrees). All those easterly unity vectors will overwhelm the westerly ones. I'm not sure you want to give such weight to vectors that were nearly zero in magnitude.


The website you cited:


confirms the method that I outlined in my 20 May 2005 posting above.


Robert Scott Real-Time Specialties Embedded Systems Consulting


The following site explains meteorological data processing including wind direction and speed. I suspect it may well not have been available at the time of the initial posting mind you.


I've written the following program in C. Can you tell me if it works correctly for your problem?


#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static float angle_0to360(float theta)
{
    int k = (int)(theta / 360);

    theta = theta - k * 360;
    if (theta < 0)
        theta = theta + 360;
    return theta;
}

static float new_angle_mean(float old_theta, float new_theta, int old_num)
{
    float mean_theta;
    float dtheta;

    if (old_theta == new_theta)
        return old_theta;

    dtheta = fabsf(old_theta - new_theta);
    if (dtheta > 180) {
        old_theta = (old_theta > 180) ? old_theta - 360 : old_theta;
        new_theta = (new_theta > 180) ? new_theta - 360 : new_theta;
    }

    mean_theta = (old_theta*old_num + new_theta) / (old_num + 1.);
    return angle_0to360(mean_theta);
}

int main()
{
    int i, num;
    float *theta;
    float mean_theta = 0.;

    printf("give number of angles: ");
    scanf("%d", &num);

    theta = malloc(num * sizeof(float));

    printf("give %d angles: ", num);

    for (i = 0; i < num; i++) {
        scanf("%f", &theta[i]);
        theta[i] = angle_0to360(theta[i]);
        mean_theta = new_angle_mean(mean_theta, theta[i], i);
    }

    printf("\nmean_theta = %f\n\n", mean_theta);
    free(theta);
    return 0;
}


> Minor corrections:


This is a very simple mathematical and meteorological issue (actually page 2 of "Meteorology for Scientists and Engineers" by Roland B. Stull).


Note: time period can be any time period (this example is hourly)


*** Matlab code below


For each ten-minute wind speed measurement, the u- and v-components of the wind need to be calculated in order to average wind direction.


Hi, just wondering why there is a -1 in calculating the U and V components?


I've forgotten all the mathematics I ever learned relating to this sort of thing long ago.


So, here's what I got to work for me using a couple columns of logic (no radians, sin or vector required):


Seen all the replies to this post. If anyone is still interested the following VB6 code


Basically we convert all wind directions into their rectangular co-ordinates. Next we sum and average the vertical and horizontal components, and from these averages we use the ArcTan function to work out the horizontal angle of the average wind. Knowing the sense of the horizontal and vertical components, we can establish in which of the 4 quadrants the wind vector is and convert the horizontal angle to a 0-359 deg wind angle.


I have tested his code and it works fine across all quadrants giving accurate results.




Low-Computational Iris Recognition With Moving Average Filter


A moving average filter averages a number of input samples and produces a single output sample. This averaging action removes the high-frequency components present in the signal. Moving average filters are typically used as low-pass filters. In a recursive filtering algorithm, previous output samples are also included in the averaging; this is the reason why its impulse response extends to infinity. We have developed a low-computational approach for iris recognition based on a 1D moving average filter. Simple averaging is used to reduce the effects of noise, and a significant improvement in computational efficiency can be achieved if we perform the calculation of the mean in a recursive fashion.
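
A hedged sketch of the recursive running-mean idea described above (an editor's illustration, not the package's or Masek's actual code): each output is obtained from the previous one by adding the newest sample and subtracting the oldest, so the cost per sample is constant.

def recursive_moving_average(x, n):
    """Length-n moving average computed recursively:
    out[k] = out[k-1] + (x[k] - x[k-n]) / n."""
    if len(x) < n:
        return []
    out = [sum(x[:n]) / float(n)]            # first window computed directly
    for i in range(n, len(x)):
        out.append(out[-1] + (x[i] - x[i - n]) / float(n))
    return out

print(recursive_moving_average([1, 2, 3, 4, 5, 6, 7, 8], 3))   # [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]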


This code uses an optimized version of Libor Masek's routines for iris segmentation, available here.


Libor Masek, Peter Kovesi. MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. The School of Computer Science and Software Engineering, The University of Western Australia, 2003.


Keywords: Matlab, source, code, iris, recognition, moving, average, filter, low, computational.




Double exponential smoothing uses two constants and is better at handling trends


As was previously observed, Single Smoothing does not excel in following the data when there is a trend. This situation can be improved by the introduction of a second equation with a second constant, \(\gamma\), which must be chosen in conjunction with \(\alpha\).


Here are the two equations associated with Double Exponential Smoothing. $$ \begin{eqnarray} S_t & = & \alpha y_t + (1 - \alpha)(S_{t-1} + b_{t-1}) & & 0 \le \alpha \le 1 \\ b_t & = & \gamma(S_t - S_{t-1}) + (1 - \gamma) b_{t-1} & & 0 \le \gamma \le 1 \end{eqnarray} $$ Note that the current value of the series is used to calculate its smoothed value replacement in double exponential smoothing.


Several methods to choose the initial values


As in the case for single smoothing, there are a variety of schemes to set initial values for \(S_t\) and \(b_t\) in double smoothing.


\(S_1\) is in general set to \(y_1\). Here are three suggestions for \(b_1\). $$ \begin{eqnarray} b_1 & = & y_2 - y_1 \\ b_1 & = & \tfrac{1}{3}\left[ (y_2 - y_1) + (y_3 - y_2) + (y_4 - y_3) \right] \\ b_1 & = & \frac{y_n - y_1}{n - 1} \end{eqnarray} $$


Meaning of the smoothing equations


The first smoothing equation adjusts \(S_t\) directly for the trend of the previous period, \(b_{t-1}\), by adding it to the last smoothed value, \(S_{t-1}\). This helps to eliminate the lag and brings \(S_t\) to the appropriate base of the current value.


The second smoothing equation then updates the trend, which is expressed as the difference between the last two values. The equation is similar to the basic form of single smoothing, but here applied to the updating of the trend.
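
A hedged direct transcription of the two equations into Python (an illustrative sketch, using the simple initialization \(b_1 = y_2 - y_1\) and arbitrary sample data and constants):

def double_exponential_smoothing(y, alpha, gamma):
    """Holt's double exponential smoothing: returns the smoothed series S_t."""
    S, b = [y[0]], [y[1] - y[0]]             # S_1 = y_1, b_1 = y_2 - y_1
    for t in range(1, len(y)):
        S.append(alpha * y[t] + (1 - alpha) * (S[-1] + b[-1]))   # uses S_{t-1}, b_{t-1}
        b.append(gamma * (S[-1] - S[-2]) + (1 - gamma) * b[-1])  # updates the trend
    return S

data = [10, 12, 13, 15, 16, 18, 19, 21, 22, 24]
print([round(v, 2) for v in double_exponential_smoothing(data, alpha=0.5, gamma=0.3)])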


Non-linear optimization techniques can be used


The values for \(\alpha\) and \(\gamma\) can be obtained via non-linear optimization techniques, such as the Marquardt Algorithm.


%Student Dave's tutorial on: Image processing for object
%tracking (aka giving eyes to your robot :)
%Copyright Student Dave's Tutorials 2012
%if you would like to use this code, please feel free, just remember to
%reference and tell your friends. )
%requires matlabs image processing toolbox


%What the heck does this code do!?
%the code finds the hexbug by using a series of basic, but effective
%image processing techniques (formal talk for a second -->) :
% 1) Averaged background subtraction
% 2) Noise reduction via image smoothing using 2-d gaussian filter.
% 3) Threshold and point detection in binary image.


clear all; close all; set(0,'DefaultFigureWindowStyle','docked') %dock the figures..just a personal preference you don't need this.


%% get listing of frames so that you can cycle through them easily.
f_list = dir('*png');


%% make average of background images (i.e. images with no objects of interest)
% Here we just read in a set of images (N) and then take the average of
% them so that we are confident we got a good model of what the background
% looks like (i.e. a template free from any potential weird image artifacts)


N = 20; % num of frames to use to make averaged background, that is, images with no bug!
img = zeros(288,352,N); %define image stack for averaging (if you don't know what this is, just load the image and check it with size())
for i = 1:N
    img_tmp = imread(f_list(i).name); %read in the given image
    img(:,:,i) = img_tmp(:,:,1); % we don't really care about the rgb image values, so we just take the first dimension of the image
end
bck_img = (mean(img,3)); %take the average across the image stack..and bingo! there's your background template!
subplot(121);imagesc(bck_img)
subplot(122);imagesc(img(:,:,1))
colormap(gray)
clear img; % free up memory.


%initialize gaussian filter


%using fspecial, we will make a gaussian template to convolve (pass over)
%the image to smooth it.
hsize = 20; sigma = 10;
gaus_filt = fspecial('gaussian', hsize, sigma);
subplot(121); imagesc(gaus_filt)
subplot(122); mesh(gaus_filt)
colormap(jet)


%this one is just for making the coordinate locations more visible.
SE = strel('diamond', 7); %another tool for making fun matrices :) this one makes a matrix object for passing into imdilate()


%% iteratively (frame by frame) find bug!
CM_idx = zeros(length(f_list),2); % initialize the variable that will store the bug locations (x, y)


for i = 1:2:length(f_list)


img_tmp = double(imread(f_list(i).name)); %load in the image and convert to double to allow for computations on the image
img = img_tmp(:,:,1); %reduce to just the first dimension, we don't care about color (rgb) values here.
subplot(221);imagesc(img); title('Raw');


%{
%VERY HARD TRACKING
%for frames 230:280, make the bug very hard to track
if (i > 230) && (i < 280) && (mod(i,3) == 0 )
    J = imnoise(img,'speckle');
    img = img + J*200;
end
%}


%subtract background from the image
sub_img = (img - bck_img);
subplot(222);imagesc(sub_img); title('background subtracted');
%gaussian blurr the image
gaus_img = filter2(gaus_filt, sub_img,'same');
subplot(223);imagesc(gaus_img); title('gaussian smoothed');
%threshold the image. here i just made a quick histogram to see what
%value the bug was below
subplot(224);hist(gaus_img(:));
thres_img = (gaus_img < -15);
subplot(224);imagesc(thres_img); title('thresholded');


%% TRACKING! (i.e. get the coordinates of the bug)
%quick solution for finding center of mass for a BINARY image
%basically, just get indices of all locations above threshold (1) and
%take the average, for both the x and y directions. This will give you
%the average location in each dimension, and hence the center of the
%bug..unless of course, something else (like my hand) passes threshold :P
%if it doesn't find anything, it randomly picks a pixel
[x, y] = find(thres_img);
if ~isempty(x)
    CM_idx(i,:) = ceil([mean(x) mean(y)]+1); % i used ceiling to avoid zero indices, but it makes the system SLIGHTLY biased, meh, no biggie, not the point here :)
else
    CM_idx(i,:) = ceil([rand*200 rand*200]);
end


%{
%NOT SO HARD TRACKING
%for frames 230:280, make the bugtracking just a lil noisy by randomly sampling
%around the bugtracker
if (i > 230) && (i < 280) && (mod(i,2) == 0 )
    CM_idx(i,:) = [round(CM_idx(i,1) + randn*10) round(CM_idx(i,2) + randn*10)];
end
%}


%{
%NO TRACKING
%for frames 230:280, drop the bugtracking entirely
if (i > 230) && (i < 280)
    CM_idx(i,:) = [NaN NaN];
end
%}


%% now, we visualize everything :)


%create a dilated dot at this point for visualization
%make binary image with single coordinate of bug = 1 and rest zeros.
%then dilate that point to make a more visible circle.
bug_img = zeros(size(thres_img));
bug_img(CM_idx(i,1),CM_idx(i,2)) = 1;


%{
% if you are running the "no tracking" segment above, you'll need to
% skip over that segment, and thus use this code
if ((i > 230) && (i < 280))
    bug_img(CM_idx(i,1),CM_idx(i,2)) = 1;
end
%}


bug_img = imdilate(bug_img, SE); subplot(224);imagesc(thres_img + bug_img); title('thresholded and extracted (red diamond)'); axesHandles = get(gcf,'children'); set(axesHandles, 'XTickLabel', [], 'XTick', []); set(axesHandles, 'YTickLabel', [], 'YTick', []) ;


%save out the hexbug coordinates


%{
%nice and elegant solution for center of mass of a gray scale image (i.e. doesn't have to be binary like in our case)
% http://www.mathworks.com/matlabcentral/newsreader/author/109726
%These next 4 lines produce a matrix C whose rows are
% the pixel coordinates of the image A
C = cellfun(@(n) 1:n, num2cell(size(thres_img)), 'uniformoutput', 0);
[C{:}] = ndgrid(C{:});
C = cellfun(@(x) x(:), C, 'uniformoutput', 0);
C = [C{:}];
%This line computes a weighted average of all the pixel coordinates.
%The weight is proportional to the pixel value.
CenterOfMass = thres_img(:).'*C/sum(thres_img(:),'double')
%}


Running and Weighted Averages


Both running and weighted averages are important filtering methods for statistical analysis.


Running Average


Often used to illustrate climatic trends by temporally smoothing data.


Calculated by finding a number of successive means, each mean incorporating the same number of observations.


Each successive mean will drop first value of the mean interval and add the next value in the dataset to the next mean interval.


Tends to damp out extreme values and highlights the movement of data with time.


Also known as a moving average .
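
A hedged sketch of the calculation just described (an editor's illustration with made-up anomaly values): each successive mean drops the first value of its interval and adds the next value in the dataset.

def running_average(values, window):
    """Simple running (moving) average: each output is the mean of `window`
    successive values; the series shortens by window - 1 points."""
    return [sum(values[i:i + window]) / float(window)
            for i in range(len(values) - window + 1)]

anomalies = [0.2, -0.1, 0.4, 0.0, -0.3, 0.5, 0.1]     # e.g. yearly temperature anomalies
print(running_average(anomalies, 3))                  # 3-point running average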


Example: Calculate a 3-month running average of gridded temperature anomaly data.


Locate Dataset and Variable


Select the "Datasets by Catagory" link in the blue banner on the Data Library page.


Click on the "Atmosphere" link.


Select the NOAA NCEP CPC CAMS dataset.


Select the "anomaly" link under the Datasets and Variables subheading.


Choose the "temperature anomaly" link, again located under the Datasets and Variables subheading. CHECK


Select Spatial Domain


Click on the "Data Selection" link in the function bar.


Enter the text 130W to 30W and 70S to 70N in the appropriate text boxes.


Press the Restrict Ranges button and then the Stop Selecting button. CHECK


Calculate Running Average


Click on the "Expert Mode" link in the function bar.


Enter the following text below the text already there:


Press the OK button. CHECK The command above will compute the 3-month running average.


View Running Average


To see your results, choose the viewer with coasts drawn. CHECK The image depicts the running average of the first three months of the dataset, Jan-Mar 1950. The anomalies may be easier to see if we change the color scale.


Click on the right most link in the blue source bar to exit the viewer.


Enter the following text in the Expert Mode text box below the text already there:


Press the OK button. CHECK These commands format the color scale so that anomalies are easier to observe.


Replace the range with a three month period during an El Niño event: Jun-Aug 1983 .


Click the Redraw button. CHECK


Running Average of Gridded Temperature Anomaly Data at 130W-30W, 70S-70N. The high positive anomalies off the western coast of South America are associated with the El Niño event that summer.


Example: Calculate the 20-year running average of April precipitation for a location in the Pampas Region of Southern South America.


Locate Dataset and Variable


Select the "Datasets by Catagory" link in the blue banner on the Data Library page.


Click on the "Atmosphere" link.


Select the NOAA NCEP CPC CAMS dataset.


Select the "mean" link under the Datasets and Variables subheading.


Choose the "precipitation" link, again located under the Datasets and Variables subheading. CHECK


Select Spatial Domain


Click on the "Data Selection" link in the function bar.


Enter the text 60W and 25S in the appropriate text boxes.


Press the Restrict Ranges button and then the Stop Selecting button. CHECK


Select Temporal Domain


Click on the "Expert Mode" link in the function bar.


Enter the following lines below the text already there:


Press the OK button. CHECK The splitstreamgrid function splits the time grid into two new time grids. The T grid has a period of 12 months and a step of 1. This grid represents data from January, February, March, etc. The T2 grid has a step of 12 and is unperiodic. This grid represents the years from the beginning of the dataset to the end of the dataset. The next command, T (Apr) VALUES, will retain only the April values from the T grid.


View April Precipitation


To see the results of this operation, choose the time series viewer. CHECK Precipitation is labeled on the Y-axis in mm/month and time is labeled on the X-axis in years. Each X-axis value represents mean April precipitation for that year.


Mean April Precipitation at 60W, 25S for 1950 to 2000 Without smoothing the data, it may be difficult to recognize any trends over the 50-year span pictured above. Applying a running average, however, will often make any trend in the data more distinguishable.


Calculate Running Average


Click on the right most link in the blue source bar to exit the viewer.


Scroll down to the Grids subheading.


Notice under the Grids subheading that the new time grid created, T2, represents months since 1950 ordered from 1950 to 2000 by 12. Every 12 grid points in T2 correspond to 1 year. The 20-year running average is calculated over the T2 variable, and must be evaluated in months, not years.


In the Expert Mode text box, enter the following line below the text already there:


Press the OK button. CHECK


This command computes the 20-year (12*20 = 240 months) running average over T2.


View Running Average


To see the results of this operation, choose the time series viewer. CHECK


Running Mean of April Precipitation at 60W, 25S for 1950 to 2000. The increasing trend in April precipitation from 1950 to 2000 becomes visible after running averages are employed. Note that the time grid extends from 1960 - 1990. This is due to the fact that each successive mean in the running average is labeled according to its midpoint. For example, the first mean in the running average includes the interval April 1950 - April 1969, and is labeled as April 1960.


Weighted Average


Differs from a regular average in that each value in the dataset is not represented equally.


The degree of "importance" of each data value, which determines how one value will be included in the average relative to the other values, is called the weight.


Determined by weighting each value in the series, adding together the weighted values, and then dividing by the total weight.


Often used to account for area changes between meridians at varying latitudes by using the cosine of the latitude as the weights.
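
A hedged sketch of that cosine-of-latitude weighting (an editor's illustration with a synthetic zonal-mean field, not the Data Library's Ingrid command):

import numpy as np

lats = np.arange(-87.5, 90, 5.0)                 # grid-cell centre latitudes
values = 200 + 100*np.cos(np.radians(lats))      # synthetic field, one value per latitude band

weights = np.cos(np.radians(lats))               # weight each band by cos(latitude)
weighted = np.sum(weights * values) / np.sum(weights)
unweighted = values.mean()
print("weighted:", round(weighted, 1), " unweighted:", round(unweighted, 1))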


Example: Find spatially weighted averages of monthly solar radiation.


Locate Dataset and Variable


Select the "Datasets by Catagory" link in the blue banner on the Data Library page.


Click on the "Air-Sea Interface" link.


Select the OSUSFC dataset.


Select the "Data" link under the Datasets and Variables subheading.


Choose the "solar radiation" link, again located under the Datasets and Variables subheading. CHECK Notice under the Grids subheading that the time variable is periodic from January to December. The OSUSFC dataset consists of monthly climatologies for each variable.


No ranges will be adjusted in this example. The dataset will be analyzed over its entire temporal and spatial grids.


Calculate Weighted Average


Click on the "Expert Mode" link in the function bar.


Enter the following text below the text already there:


Press the OK button. CHECK The above command calculates a spatial average, weighted with the cosine of the latitude. The resulting dataset is a time series from Jan to Dec of average solar radiation in W/m^2.


View Weighted Average


To see the results of this operation, choose the time series viewer. CHECK


Weighted Spatial Average of Monthly Climatological Solar Radiation


Averaged over the world, solar radiation is at a minimum near July and at a maximum near January.


View Differences Between Weighted Average and Non-weighted Average


Click on the right most link in the blue source bar to exit the viewer.


In the Expert Mode text box, enter the following text below the text already there:


Press the OK button. CHECK The commands above subtract the non-weighted average from the weighted average.


To see the results of this operation, choose the time series viewer. CHECK


Difference Between Weighted Average and Non-weighted Average of Monthly Climatological Solar Radiation. The largest difference between the weighted average and the regular average occurs sometime near April and October. Referring to the previous graph of the weighted average, these are times when solar radiation is changing most rapidly.


MATLAB-based Model of Autoregressive Moving Average (ARMA) in Stock Prediction


ZHAI Zhi-rong, BAI Yan-ping(North University of China, Taiyuan Shanxi,030051)


Using time series observations available up to the current moment to predict values at a future time, an autoregressive moving average (ARMA) model was established with MATLAB as the tool. Taking 360 trading days of data for the individual stock Yatai Group as a sample, the model predicted 10 days of closing prices on the stock market and was compared with a BP network model containing one hidden layer; the results showed that the ARMA model achieves high precision for short-term stock price prediction.
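
A hedged, generic sketch of the kind of workflow described (an editor's illustration, not the paper's model or data: the series is synthetic, the ARMA orders are arbitrary, and statsmodels is assumed to be available):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# synthetic "closing price" series standing in for 360 trading days
prices = 100 + 5*np.sin(np.arange(360)/20.0) + rng.normal(0, 1, 360)

model = ARIMA(prices, order=(2, 0, 1))   # ARMA(2,1); illustrative orders only
fit = model.fit()
print(fit.forecast(steps=10))            # predicted closing prices for the next 10 days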






8.9 Kaufman Adaptive Moving Average


The Kaufman adaptive moving average (KAMA) by Perry J. Kaufman (http://www.perrykaufman.com) is an exponential style average (see Exponential Moving Average) but with a smoothing that varies according to recent data. In a steadily progressing move the latest prices are tracked closely, but when going back and forward with little direction the latest prices are given only small weights.


The smoothing factor is determined from a given past N days closes (default 10). The net change (up or down) over that time is expressed as a fraction of the total daily changes (up and down). This is called the “efficiency ratio”. If every day goes in the same direction then the two amounts are the same and the ratio is 1. But in the usual case where prices backtrack to some extent then the net will be smaller than the total. The ratio is 0 if no net change at all.


The ER is rescaled to between 0.0645 and 0.666 and then squared to give an alpha factor for the EMA of between 0.444 and 0.00416. This corresponds to EMA periods from a fast 3.5 days to a very slow 479.5 days.


An exponential moving average of prices is then taken, using each alpha value calculated.
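
A hedged sketch of the calculation just described (an editor's illustration, not Kaufman's or the charting program's code; the default 10-day efficiency ratio and the 2-day/30-day smoothing constants reproduce the 0.666/0.0645 endpoints mentioned above):

import numpy as np

def kama(prices, er_period=10, fast=2, slow=30):
    """Kaufman adaptive moving average following the description above."""
    prices = np.asarray(prices, dtype=float)
    fast_sc, slow_sc = 2.0/(fast + 1), 2.0/(slow + 1)     # 0.666... and 0.0645...
    out = np.full_like(prices, np.nan)
    out[er_period] = prices[er_period]                    # seed with a price
    for i in range(er_period + 1, len(prices)):
        change = abs(prices[i] - prices[i - er_period])   # net move over the window
        volatility = np.sum(np.abs(np.diff(prices[i - er_period:i + 1])))  # total daily moves
        er = change / volatility if volatility > 0 else 0.0   # efficiency ratio, 0..1
        alpha = (er * (fast_sc - slow_sc) + slow_sc) ** 2     # squared, ~0.00416 .. ~0.444
        out[i] = out[i - 1] + alpha * (prices[i] - out[i - 1])
    return out

rng = np.random.default_rng(3)
series = 100 + np.cumsum(rng.normal(0, 1, 60))
print(np.round(kama(series)[-5:], 2))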


These alpha values can be viewed directly with “KAMA alpha” in the lower indicator window (a low priority option, near the end of the lists). High values show where KAMA is tracking recent prices closely, low values show where it’s responding only slowly.


8.9.1 Additional Resources


Copyright 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 Kevin Ryde


Chart is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version.


EWMA 101


The EWMA approach has one attractive feature: it requires relatively little stored data. To update our estimate at any point, we only need a prior estimate of the variance rate and the most recent observation value.


A secondary objective of EWMA is to track changes in the volatility. For small values of lambda, recent observations affect the estimate promptly. For values of lambda closer to one, the estimate changes slowly based on recent changes in the returns of the underlying variable.


The RiskMetrics database (produced by JP Morgan and made publicly available) uses the EWMA with lambda = 0.94 for updating daily volatility.


IMPORTANT: The EWMA formula does not assume a long run average variance level. Thus, the concept of volatility mean reversion is not captured by the EWMA. The ARCH/GARCH models are better suited for this purpose.


Lambda




The RiskMetrics database, produced by JP Morgan and made publicly available in 1994, uses the EWMA model with lambda = 0.94 for updating its daily volatility estimates. The company found that, across a range of market variables, this value of lambda gives variance forecasts that come closest to the realized variance rate. The realized variance rate on a particular day was calculated as an equally-weighted average of the squared returns on the subsequent 25 days.


Similarly, to compute the optimal value of lambda for our data set, we need to calculate the realized volatility at each point. There are several methods, so pick one. Next, calculate the sum of squared errors (SSE) between the EWMA estimate and the realized volatility. Finally, minimize the SSE by varying the lambda value.


Sounds simple? It is. The biggest challenge is to agree on an algorithm to compute realized volatility. For instance, the folks at RiskMetrics chose the subsequent 25 days to compute the realized variance rate. In your case, you may choose an algorithm that utilizes Daily Volume, HI/LO and/or OPEN-CLOSE prices.
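
A sketch of that calibration in MATLAB, assuming vectors u (daily returns) and realizedVar (the realized-variance measure you settled on), and a helper ewmaVarOf(u, lambda) implementing the recursion sketched earlier:

sse = @(lambda) sum((ewmaVarOf(u, lambda) - realizedVar).^2);   % sum of squared errors
lambdaOpt = fminbnd(sse, 0.50, 0.999);                          % minimise over a sensible range
fprintf('Optimal lambda: %.3f\n', lambdaOpt);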


Frequently Asked Questions


Q 1: Can we use EWMA to estimate (or forecast) volatility more than one step ahead?


The EWMA volatility representation does not assume a long-run average volatility, and thus, for any forecast horizon beyond one step, the EWMA returns a constant value: the forecast for every longer horizon is simply the current one-step-ahead estimate.


Advanced Source Code. Com




Genetic algorithms belong to a class of machine learning algorithms that have been successfully used in a number of research areas. There is a growing interest in their use in financial economics, but so far there has been little formal analysis. In the stock market, a technical trading rule is a popular tool for analysts and users to do their research and decide when to buy or sell their shares. The key issue for the success of a trading rule is the selection of values for all of its parameters and their combinations. However, the parameters can vary over a large domain, so it is difficult for users to find the best parameter combination. By using a genetic algorithm, we can search for both the structure and the parameters of the rules at the same time. We have optimized a trading system developed by Alfredo Rosa using genetic algorithms; a new, complex 16-bar trading rule has been discovered and tested on the Italian FIB with brilliant results.
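
The protected code is not reproduced here; the sketch below only illustrates the general idea of letting a genetic algorithm (ga, from the toolbox named further down) search over the parameters of a simple moving-average crossover rule. px is an assumed column vector of closing prices and backtestReturn is a hypothetical helper.

objective = @(p) -backtestReturn(px, round(p(1)), round(p(2)));   % ga minimises, so negate
lb = [2 20];  ub = [20 200];                                      % assumed parameter ranges
opts = optimoptions('ga', 'PopulationSize', 50, 'MaxGenerations', 100);
bestParams = ga(objective, 2, [], [], [], [], lb, ub, [], opts);

function r = backtestReturn(px, nFast, nSlow)
% Hypothetical helper: total log return of a long-only MA crossover rule.
fastMA = movmean(px, [nFast-1 0]);                      % trailing moving averages
slowMA = movmean(px, [nSlow-1 0]);
pos = [0; double(fastMA(1:end-1) > slowMA(1:end-1))];   % signal acted on the next bar
r = sum(pos .* [0; diff(log(px))]);
end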


Index Terms: Matlab, source, code, data mining, trading system, stock market prediction, trading rule extraction, genetic algorithms, trading systems, bar chart, candlestick chart, price patterns, parameter combination.


Figure 1. Genetic structure


An optimized complex price pattern discovered by genetic algorithms.


Demo code (protected P-files) available for performance evaluation. Matlab Financial Toolbox, Genetic Algorithm and Direct Search Toolbox are required.




Description: Biometrika is primarily a journal of statistics in which emphasis is placed on papers containing original theoretical contributions of direct or potential value in applications. From time to time, papers in bordering fields are published.


Coverage: 1901-2010 (Vol. 1, No. 1 - Vol. 97, No. 4)


The "moving wall" represents the time period between the last issue available in JSTOR and the most recently published issue of a journal. Moving walls are generally represented in years. In rare instances, a publisher has elected to have a "zero" moving wall, so their current issues are available in JSTOR shortly after publication. Note: In calculating the moving wall, the current year is not counted. For example, if the current year is 2008 and a journal has a 5 year moving wall, articles from the year 2002 are available.


Terms Related to the Moving Wall Fixed walls: Journals with no new volumes being added to the archive. Absorbed: Journals that are combined with another title. Complete: Journals that are no longer published or that have been combined with another title.


Subjects: Science & Mathematics, Statistics


Abstract


The problem considered is that of estimating an autoregressive-moving average system, including estimating the degrees of the autoregressive and moving average lag operators. The basic method is that introduced by Hannan & Rissanen (1982). However, that method may sometimes overestimate the degrees and modifications are here introduced to correct this. The problem is itself due to the use of a long autoregression, of order c log T when T is large, in the first stage of the process. The effect of this is investigated and in particular its effect on the speed of convergence of the estimates.




MATLAB - Arrays


All variables of all data types in MATLAB are multidimensional arrays. A vector is a one-dimensional array and a matrix is a two-dimensional array.


We have already discussed vectors and matrices. In this chapter, we will discuss multidimensional arrays. However, before that, let us discuss some special types of arrays.


Special Arrays in MATLAB


In this section, we will discuss some functions that create special arrays. For all these functions, a single argument creates a square array, and two arguments create a rectangular array.


The zeros() function creates an array of all zeros −


For example −


MATLAB will execute the above statement and return the following result −


The ones() function creates an array of all ones −


For example −


MATLAB will execute the above statement and return the following result −


The eye() function creates an identity matrix.


For example −


MATLAB will execute the above statement and return the following result −


The rand() function creates an array of uniformly distributed random numbers on (0,1) −


For example −


MATLAB will execute the above statement and return the following result −
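
The original statements and their outputs were not preserved in this copy; representative calls for the four functions would be:

z = zeros(5)        % 5-by-5 array of zeros (one argument: square)
o = ones(4, 3)      % 4-by-3 array of ones (two arguments: rectangular)
I = eye(4)          % 4-by-4 identity matrix
r = rand(3, 5)      % 3-by-5 array of uniform random numbers on (0,1)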


A Magic Square


A magic square is a square matrix that produces the same sum when its elements are added row-wise, column-wise, or diagonally.


The magic() function creates a magic square array. It takes a single argument that gives the size of the square. The argument must be a scalar greater than or equal to 3.


MATLAB will execute the above statement and return the following result −
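
The statement and its output were not preserved either; a representative call is:

magic(4)
% ans =
%     16     2     3    13
%      5    11    10     8
%      9     7     6    12
%      4    14    15     1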


Multidimensional Arrays


An array having more than two dimensions is called a multidimensional array in MATLAB. Multidimensional arrays in MATLAB are an extension of the normal two-dimensional matrix.


Generally to generate a multidimensional array, we first create a two-dimensional array and extend it.


For example, let's create a two-dimensional array a.


MATLAB will execute the above statement and return the following result −


The array a is a 3-by-3 array; we can add a third dimension to a by providing values for the third index, like −


MATLAB will execute the above statement and return the following result −
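
The two missing statements might look like the following sketch (the numbers are my own), first creating the 3-by-3 array a and then adding a second page along the third dimension:

a = [7 9 5; 6 1 9; 4 3 2]              % a 3-by-3 two-dimensional array
a(:, :, 2) = [1 2 3; 4 5 6; 7 8 9]     % a is now a 3-by-3-by-2 multidimensional array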


We can also create multidimensional arrays using the ones(), zeros() or the rand() functions.


MATLAB will execute the above statement and return the following result −


We can also use the cat() function to build multidimensional arrays. It concatenates a list of arrays along a specified dimension −


Syntax for the cat() function is −
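
The syntax line itself is missing above; the documented form is:

B = cat(dim, A1, A2, A3, ...)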


B is the new array created


A1, A2, ... are the arrays to be concatenated


dim is the dimension along which to concatenate the arrays


Example


Create a script file and type the following code into it −


When you run the file, it displays −
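
The script itself was not preserved; a representative version, concatenating three matrices along the third dimension, would be:

a = ones(3, 3);
b = 2 * ones(3, 3);
c = 3 * ones(3, 3);
d = cat(3, a, b, c)     % d is a 3-by-3-by-3 array; d(:,:,2) is the matrix of 2s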


Array Functions


MATLAB provides the following functions to sort, rotate, permute, reshape, or shift array contents.


The following examples illustrate some of the functions mentioned above.


Length, Dimension and Number of elements:


Create a script file and type the following code into it −


When you run the file, it displays the following result −
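
The script was dropped from this copy; a representative version exercising length(), ndims() and numel() would be:

x = [7.1, 3.4, 7.2, 28.5, 1.4, 5.3, 9.2];
length(x)       % 7, the number of elements along the largest dimension
ndims(x)        % 2, since a vector is stored as a 1-by-7 matrix
numel(x)        % 7, the total number of elements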


Circular Shifting of the Array Elements −


Create a script file and type the following code into it −


When you run the file, it displays the following result −
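
Again the script is missing; a sketch of circular shifting:

a = [1 2 3; 4 5 6; 7 8 9]
b = circshift(a, 1)          % shift the rows down by one position
c = circshift(a, [1 -1])     % shift rows down by 1 and columns left by 1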


Sorting Arrays


Create a script file and type the following code into it −


When you run the file, it displays the following result −
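
A representative sorting script (the original was not preserved):

v = [23 45 12 9 5 0 19 17];
sort(v)                  % ascending sort of a vector
sort(v, 'descend')       % descending sort
m = [2 6 4; 5 3 9; 2 0 1];
sort(m, 1)               % sort the elements of each column
sort(m, 2)               % sort the elements of each row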


Cell Array


Cell arrays are arrays of indexed cells, where each cell can store an array with its own dimensions and data type.


The cell function is used for creating a cell array. Syntax for the cell function is −
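
The syntax lines were dropped; the documented forms are:

C = cell(dim)           % dim-by-dim cell array of empty matrices
C = cell(dim1, dim2)    % dim1-by-dim2 cell array (and so on for further dimensions)
D = cell(obj)           % convert a Java or .NET array object to a cell array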


Where,


C is the cell array;


dim is a scalar integer or vector of integers that specifies the dimensions of cell array C;


dim1, ..., dimN are scalar integers that specify the dimensions of C;


obj is one of the following:


Java array or object


.NET array of type System.String or System.Object


Example


Create a script file and type the following code into it −


When you run the file, it displays the following result −
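
The script was not preserved; a representative cell-array example would be:

c = cell(2, 5);                                            % an empty 2-by-5 cell array
c = {'Red', 'Blue', 'Green', 'Yellow', 'White'; 1, 2, 3, 4, 5}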


Accessing Data in Cell Arrays


There are two ways to refer to the elements of a cell array −


Enclosing the indices in parentheses ( ), to refer to sets of cells


Enclosing the indices in braces { }, to refer to the data within individual cells


When you enclose the indices in parentheses, it refers to the set of cells; the result of such indexing is itself a cell array.


MATLAB will execute the above statement and return the following result −


You can also access the contents of cells by indexing with curly braces.


For example −


MATLAB will execute the above statement and return the following result −
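
Putting the two indexing styles side by side, continuing the hypothetical cell array c from the sketch above:

c(1, 1:3)       % parentheses: a 1-by-3 cell array containing 'Red', 'Blue', 'Green'
c{1, 1:3}       % braces: the contents themselves, returned as a comma-separated list
v = c{2, 3}     % braces on a single cell: v is the number 3, not a 1-by-1 cell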


Using the Moving Average Tool from the Excel 2007 and Excel 2010 Analysis ToolPak


Introduction


Welcome to my latest hub on the Analysis ToolPak in both Excel 2007 and Excel 2010. Today, I will look at how to use the Moving Average tool. This tool is used when you want to perform trend analysis on a sequence of data. In my example, I am looking at the number of overall hits my hubs get on a daily basis. I want to determine if my daily hits are trending upwards and the Moving Average Tool will create a trend line as part of the analysis to show me if this is the case (hopefully it is).


Moving averages use an interval to calculate the average over time. The interval chosen is the number of values Excel 2007 or Excel 2010 will average to create the trend line. I will go through how to determine the best interval to fit your data and give you the most accurate and meaningful trend line.


Before beginning, I have a hub that covers adding the Analysis ToolPak to Excel if it is not installed and also troubleshooting if it is installed but does not appear in Excel 2007 or Excel 2010. The hub can be found here:


When we have completed our analysis on the data using the Moving Average tool, we will be provided with a graph showing our actual data and the trend that the tool has calculated.


Example of a Moving Average with a trend line, created using the Moving Average Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Using the Moving Average Tool in Excel 2007 and Excel 2010


Provided you have the Analysis ToolPak installed in either Excel 2007 or Excel 2010, the ToolPak can be found on the Data tab, in the Analysis group.


To use it, simply click the Data Analysis button (you do not have to select any data beforehand).


Next, we select Moving Average from the list of available Analysis Tools


The Moving Average dialogue box will now open


To begin, select the Input Range. The range must be in a column and must also be contiguous (have no breaks). This also includes your labels, should you select them


Press Return or Enter to select the range


Select the option for Labels in First Row if you have labels in that first row


For Interval, leave this at the default of 3. We will discuss intervals in more detail in a further section below


Select an Output Range under Output options (in my example, I chose the next column to keep it simple)


Select Chart Output so that Excel 2007 or Excel 2010 will create a chart for you as this is most likely what you will refer to most when using the Moving Average tool


Leave Standard Errors blank (this creates a second column containing the standard error data) as it is unlikely that you will use it


Click OK and Excel will create a column containing the moving averages and a chart


Note: The chart that is created is for some reason very small, you will need to re-size it to be able to actually see the detail. The figure below will give you an idea of how it looks once Excel has completed the analysis on your data.


Example of the initial output created using the Moving Average Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Note: As you can see from the figure above, the first two cells in the Moving Average column (I have highlighted the cells) have #N/A in them. This is normal as Excel cannot perform the moving average until it has three values (the interval in other words).
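
For comparison, the same trailing three-value average, including the leading gaps, can be sketched in MATLAB (hits is an assumed vector of the daily hit counts):

ma3 = movmean(hits, [2 0]);    % trailing window: today's value plus the two before it
ma3(1:2) = NaN;                % mirror Excel's #N/A cells until three values exist
plot([hits(:) ma3(:)]);        % actual data and the 3-interval moving average
legend('Daily hits', '3-interval moving average');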


I have tidied up the chart below to show my data with a moving average of three days. I have a hub that covers creating and editing charts in much greater detail which can be found here:


As you can see, the trend line (forecast) is very variable and the trend is not that easy to see. We will discuss selecting an appropriate interval in the next section.


Moving Average with an interval that does not show a definitive trend created using the Moving Average Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Choosing the correct interval for your moving average in Excel 2007 and Excel 2010


As you can see from the figure above, the moving average is extremely variable and does not illustrate a useful trend or forecast. Getting the correct interval is crucial so that you can easily see the trend in your data.


Clearly in my example, using a forecast with an interval of three days would not provide me with any valuable data. The trend is becoming evident in the second chart in the figure below with the interval of seven days, and is most clear in the thirty-day moving average, so that is the one I would choose.


Moving Averages with different intervals created using the Moving Average Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Two things become immediately apparent from those graphs.


The trend becomes clearer, the higher the interval


The data series needs to be longer if you choose a longer interval (the gap between the start of the data series and the start of the trend line becomes greater the higher the interval you choose)


In order to choose the most appropriate interval you need to balance the two factors above, choosing a sufficiently long interval to show the best trend line based on the amount of data you have.


Adding a trend line to a chart in Excel 2007 and Excel 2010


There is another method to add a trend line to a chart. This has advantages and disadvantages to using the Moving Average tool (I have added a trend line to my data in the figure below):


Trend line added to a chart in Excel 2007 and Excel 2010.


Adding a trend line allows you to forecast backwards and forwards


The line is linear and much tidier (useful to show the trend in a presentation)


Very easy to add to an existing chart


The underlying data used to create the trend line is not available


Standard Error calculation is not available


So in summary, if you want a nice straight line showing trends either forwards or backwards, use a trend line. If you are interested in the underlying mathematics or the data of the moving averages, use the more powerful Moving Average tool from the Analysis ToolPak.


I also have a hub that investigates creating trend lines to charts in much greater detail, that hub can be found here:


Conclusion


Moving averages is a useful and powerful mathematical tool for calculating a trend in a data series. Using the Moving Average tool from the Analysis ToolPak in Excel 2007 and Excel 2010, I was able to show that the daily traffic coming into my hubs is indeed trending upwards.


In this hub, I illustrated:


How to use the Moving Average tool and also


Looked at how to choose an appropriate interval for your data


I also discussed adding a trend line to an existing chart and compared using this to using the Moving Average tool


I have a number of hubs covering other popular tools from the Analysis ToolPak in both Excel 2007 and Excel 2010. These include:


Example of a Histogram, created using the Histogram Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Example of a Ranking Table, created using the Rank and Percentile Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Example of a Regression, created using the Regression Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Example of a Correlation, created using the Correlation Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Example of a table showing daily variation, created using the Sampling Tool from the Analysis Toolpak in Excel 2007 and Excel 2010.


Histogram is another tool that creates a chart of your data. This tool looks at the distribution of your data across boundaries you define. In my hub on this tool, I look at the distribution of exam results across grade boundaries:


Rank and Percentile is used to rank your data and assign a percentile to each unique value. I used this tool to rank students’ exam results and assign them a grade based on their position in that ranked list:


Correlation and Regression look at the relationship between variables. Correlation measures the strength of a relationship and regression creates a line that shows this relationship. In my hub on correlation, I examine the relationship between daily temperatures and pie sales and in my hub on regression, I look at the relationship between fish mortality and Phosphate and Nitrogen concentrations in water.


Sampling allows you to create a randomly chosen sample from a population and perform analysis on it. I use sampling to pick lottery numbers in my hub.


Many thanks for reading, and I hope that you enjoyed reading this as much as I enjoyed writing it and that you found it useful and informative. Please feel free to leave any comments you may have below.


And Finally.


Which Tool from the Analysis ToolPak in Excel 2007 and Excel 2010 do you intend to (or already regularly) use?


There is a great deal of market lore related to the US presidential elections. It is generally held that elections are good for the market, regardless of whether the incoming president is Democrat or Republican. To examine this thesis, I gathered data on presidential elections since 1950, considering only the first term of each newly elected president. My reason for considering first terms only was twofold: firstly, it might be expected that a new president is likely to exert a greater influence during his initial term in office and secondly, the 2016 contest will likewise see the appointment of a new president (rather than the re-election of a current one).


Market Performance Post Presidential Elections


The table below shows the 11 presidential races considered, with sparklines summarizing the cumulative return in the S&P 500 Index in the 12 month period following the start of the presidential term of office. The majority are indeed upward sloping, as is the overall average.


A more detailed picture emerges from the following chart. It transpires that the generally positive “presidential effect” is due overwhelmingly to the stellar performance of the market during the first year of the Gerald Ford and Barack Obama presidencies. In both cases presidential elections coincided with the market nadir following, respectively, the 1973 oil crisis and 2008 financial crisis, after which the economy staged a strong recovery.


Democrat vs. Republican Presidencies


There is a marked difference in the average market performance during the first year of a Democratic presidency vs. a Republican presidency. Doubtless, plausible explanations for this disparity are forthcoming from both political factions. On the Republican side, it could be argued that Democratic presidents have benefitted from the benign policies of their (often) Republican predecessors, while incoming Republican presidents have had to clean up the mess left to them by their Democratic predecessors. Democrats would no doubt argue that the market, taking its customary forward view, tends to react favorably to the prospect of a more enlightened, liberal approach to the presidency (aka more government spending).


Market Performance Around the Start of Presidential Terms


I shall leave such political speculations to those interested in pursuing them and instead focus on matters of a more apolitical nature. Specifically, we will look at the average market returns during the twelve months leading up to the start of a new presidential term, compared to the average returns in the twelve months after the start of the term. The results are as follows:


The twelve months leading up to the start of the presidential term are labelled -12, -11, …, -1, while the following twelve months are labelled 1, 2, …, 12. The start of the term is designated as month zero, while months that fall outside the 24 month period around the start of a presidential term are labelled as month 13.


The key finding stands out clearly from the chart: namely, that market returns during the start month of a new presidential term are distinctly negative, averaging -3.3%, while returns in the first month after the start of the term are distinctly positive, averaging 2.81%.


Assuming that market returns are approximately Normally distributed, a standard t-test rejects the null hypothesis of no difference in the means of the month 0 and month 1 returns at the 2% significance level. In other words, the “presidential effect” is both large and statistically significant.
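
A sketch of that test in MATLAB (ttest2 is in the Statistics Toolbox; month0Returns and month1Returns are assumed vectors holding the month 0 and month 1 returns across the eleven elections):

[h, p] = ttest2(month0Returns, month1Returns);    % two-sample t-test of equal means
fprintf('Reject equal means: %d (p-value = %.3f)\n', h, p);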


Conclusion: Trading the Election


Given the array of candidates before the electorate this election season, I am strongly inclined to take the trade. The market will certainly “feel the Bern” in the unlikely event that Bernie Sanders is elected president. I can even make an argument for a month 1 recovery, when the market realizes that there are limits to how much economic damage even a Socialist president can do, given constitutional checks and balances, “pen and phone” notwithstanding.


Again, an incoming president Trump is likely to be greeted by a sharp market sell-off, based on jittery speculation about the Donald’s proclivity to start a trade war with China, or Mexico, or a ground war with Russia, Iran, or anyone else. Likewise, however, the market will fairly quickly come around to the realization that electioneering rhetoric is unlikely to provide much guidance as to what a president Trump is likely to do in practice.


A Hillary Clinton presidency is likely to be seen, ex-ante, as the most benign for the market, especially given the level of (financial) support she has received from Wall Street. However, there’s a glitch: Bernie is proving much tougher to shake off than she could ever have anticipated. In order to win over his supporters, she is going to have to move out of the center ground, towards the left. Who knows what hostages to fortune a desperate Clinton is likely to have to offer the election gods in her bid to secure the White House?


In terms of the mechanics, while you could take the trade in ETF’s or futures, this is one of those situations ideally suited to options and I am inclined to suggest combining a front-month put spread with a back-month call spread.


One of the most commonly cited maxims is that market timing is impossible. In fact, empirical evidence makes a compelling case that market timing is feasible and can yield substantial economic benefits. What’s more, we even understand why it works. For the typical portfolio investor, applying simple techniques to adjust their market exposure can prevent substantial losses during market downturns.


The Background From Empirical and Theoretical Research


For the last fifty years, since the work of Paul Samuelson, the prevailing view amongst economists has been that markets are (mostly) efficient and follow a random walk. Empirical evidence to the contrary was mostly regarded as anomalous and/or economically unimportant. Over time, however, evidence has accumulated that exploitable market effects may persist. The famous 1992 paper published by Fama and French, for example, identified important economic effects in stock returns due to size and value factors, while Carhart (1997) demonstrated the important incremental effect of momentum. The combined four-factor Carhart model explains around 50% of the variation in stock returns, but leaves a large proportion unaccounted for.


Other empirical studies have provided evidence that stock returns are predictable at various frequencies. Important examples include work by Brock, Lakonishok and LeBaron (1992), Pesaran and Timmermann (1995) and Lo, Mamaysky and Wang (2000), who provide further evidence using a range of technical indicators popular among traders, showing that these add value even at the individual stock level, over and above the performance of a stock index. The research in these and other papers tends to be exceptional in terms of both quality and comprehensiveness, as one might expect from academics risking their reputations by taking on established theory. The appendix of test results to the Pesaran and Timmermann study, for example, is so lengthy that it is available only in CD-ROM format.


A more recent example is the work of Paskalis Glabadanidis, in a 2012 paper entitled Market Timing with Moving Averages. Glabadanidis examines a simple moving average strategy that, he finds, produces economically and statistically significant alphas of 10% to 15% per year, after transaction costs, which are largely insensitive to the four Carhart factors.


Glabadanidis reports evidence regarding the profitability of the MA strategy in seven international stock markets. The performance of the MA strategies also holds for more than 18,000 individual stocks. He finds that:


“The substantial market timing ability of the MA strategy appears to be the main driver of the abnormal returns.”


An Illustration of a Simple Market Timing Strategy in SPY


It is impossible to do justice to Glabadanidis’s research in a brief article and the interested reader is recommended to review the paper in full. However, we can illustrate the essence of the idea using the SPY ETF as an example.


A 24-period moving average of the monthly price series over the period from 1993 to 2016 is plotted in red in the chart below.


The moving average indicator is used to time the market using the following simple rule:


if P(t) >= MA(t), invest in SPY in month t+1


if P(t) < MA(t), invest in T-Bills in month t+1


In other words, we invest or remain invested in SPY when the monthly closing price of the ETF lies at or above the 24-month moving average, otherwise we switch our investment to T-Bills.


The process of switching our investment will naturally incur transaction costs and these are included in the net monthly returns.
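
A simple sketch of the rule in MATLAB, assuming a column vector px of monthly SPY closing prices and a matching vector tbill of monthly T-bill returns (transaction costs omitted for brevity):

ma       = movmean(px, [23 0]);                    % trailing 24-month moving average
spyRet   = [0; diff(px) ./ px(1:end-1)];           % simple monthly returns on SPY
inSPY    = [false; px(1:end-1) >= ma(1:end-1)];    % signal at month t, applied in month t+1
stratRet = inSPY .* spyRet + (~inSPY) .* tbill;    % hold SPY or T-bills accordingly
equity   = cumprod(1 + stratRet);                  % compound growth of the timing strategy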


The outcome of the strategy in terms of compound growth is compared to the original long-only SPY investment in the following chart.


The market timing strategy outperforms the long-only ETF, with a CAGR of 16.16% vs. 14.75% (net of transaction costs), largely due to its avoidance of the major market sell-offs in 2000-2003 and 2008-2009.


But the improvement isn’t limited to a 141bp improvement in annual compound returns. The chart below compares the distributions of monthly returns in the SPY ETF and market timing strategy.


It is clear that, in addition to a higher average monthly return, the market timing strategy has lower dispersion in the distribution in returns. This leads to a significantly higher information ratio for the strategy compared to the long-only ETF. Nor is that all: the market timing strategy has both higher skewness and kurtosis, both desirable features.


These results are entirely consistent with Glabadanidis’s research. He finds that the performance of the market timing strategy is robust to different lags of the moving average and in subperiods, while investor sentiment, liquidity risks, business cycles, up and down markets, and the default spread cannot fully account for its performance. The strategy works just as well with randomly generated returns and bootstrapped returns as it does for the more than 18,000 stocks in the study.


A follow-up study by the author applying the same methodology to a universe of 20 REIT indices and 274 individual REITs reaches largely similar conclusions.


Why Market Timing Works


For many investors, empirical evidence – compelling though it may be – is not enough to make market timing a credible strategy, absent some kind of “fundamental” explanation of why it works. Unusually, in the case of the simple moving average strategy, such explanation is possible.


It was Cox, Ross and Rubinstein who in 1979 developed the binomial model as a numerical method for pricing options. The methodology relies on the concept of option replication, in which one constructs a portfolio comprising holdings of the underlying stock and bonds to produce the same cash flows as the option at every point in time (the proportion of stock to hold is given by the option delta). Since the replicating portfolio produces the same cash flows as the option, it must have the same value; and since one knows the price of the stock and bond at each point in time, one can therefore price the option. For those interested in the detail, Wikipedia gives a detailed explanation of the technique.


We can apply the concept of option replication to construct something very close to the MA market timing strategy, as follows. Consider what happens when the ETF falls below the moving average level. In that case we convert the ETF portfolio to cash and use the proceeds to acquire T-Bills. An equivalent outcome would be achieved by continuing to hold our long ETF position and acquiring a put option to hedge it. The combination of a long ETF position and a 1-month put option with a delta of -1 would provide the same riskless payoff as the market timing strategy, i.e. the return on 30-day T-Bills. An option whose strike price is based on the average price of the underlying is known as an Arithmetic Asian option. Hence, when we apply the MA timing strategy we are effectively constructing a dynamic portfolio that replicates the payoff of an Arithmetic Asian protective put option struck at (just above) the moving average level.


Market Timing Alpha and The Cost of Hedging


None of this explanation is particularly contentious – the theory behind option replication through dynamic hedging is well understood – and it provides a largely complete understanding of the way the MA market timing strategy works, one that should satisfy those who are otherwise unpersuaded by arguments purely from empirical research.


There is one aspect of the foregoing description that remains a puzzle, however. An option is a valuable financial instrument and the owner of a protective put of the kind described can expect to pay a price amounting to tens or perhaps hundreds of basis points. Of course, in the market timing strategy we are not purchasing a put option per se, but creating one synthetically through dynamic replication. The cost of creating this synthetic equivalent comprises the transaction costs incurred as we liquidate and re-assemble our portfolio from month to month, in the form of bid/ask spread and commissions. According to efficient market theory, one should be indifferent as to whether one purchases the option at a fair market price or constructs it synthetically through replication – the cost should be equivalent in either case. And yet in empirical tests the cost of the synthetic protective put falls far short of what one would expect to pay for an equivalent option instrument. This is, in fact, the source of the alpha in the market timing strategy.


According to efficient market theory one might expect to pay something of the order of 140 basis points a year in transaction costs – the difference between the CAGR of the market timing strategy and the SPY ETF – in order to construct the protective put. Yet, we find that no such costs are incurred.


Now, it might be argued that there is a hidden cost not revealed in our simple study of a market timing strategy applied to a single underlying ETF, which is the potential costs that could be incurred if the ETF should repeatedly cross and re-cross the level of the moving average, month after month. In those circumstances the transaction costs would be much higher than indicated here. The fact that, in a single example, such costs do not arise does not detract in any way from the potential for such a scenario to play out. Therefore, the argument goes, the actual costs from the strategy are likely to prove much higher over time, or when implemented for a large number of stocks.


All well and good, but this is precisely the scenario that Glabadanidis’s research addresses, by examining the outcomes, not only for tens of thousands of stocks, but also using a large number of scenarios generated from random and/or bootstrapped returns. If the explanation offered did indeed account for the hidden costs of hedging, it would have been evident in the research findings.


Instead, Glabadanidis concludes:


“This switching strategy does not involve any heavy trading when implemented with break-even transaction costs, suggesting that it will be actionable even for small investors.”


Implications For Current Market Conditions


As at the time of writing, in mid-February 2016, the price of the SPY ETF remains just above the 24-month moving average level. Consequently the market timing strategy implies one should continue to hold the market portfolio for the time being, although that could change very shortly, given recent market action.


Conclusion


The empirical evidence that market timing strategies produce significant alphas is difficult to challenge. Furthermore, we have reached an understanding of why they work, from an application of widely accepted option replication theory. It appears that using a simple moving average to time market entries and exits is approximately equivalent to hedging a portfolio with a protective Arithmetic Asian put option.


What remains to be answered is why the cost of constructing put protection synthetically is so low. At the current time, research indicates that market timing strategies consequently are able to generate alphas of 10% to 15% per annum.


References


Brock, W., Lakonishok, J., LeBaron, B., 1992, “Simple Technical Trading Rules and the Stochastic Properties of Stock Returns,” Journal of Finance 47, pp. 1731-1764.


Carhart, M. M. 1997, “On Persistence in Mutual Fund Performance,” Journal of Finance 52, pp. 57–82.


Fama, E. F., French, K. R., 1992, “The Cross-Section of Expected Stock Returns,” Journal of Finance 47(2), pp. 427-465.


Glabadanidis, P. 2012, “Market Timing with Moving Averages”, 25th Australasian Finance and Banking Conference.


Glabadanidis, P. 2012, “The Market Timing Power of Moving Averages: Evidence from US REITs and REIT Indexes”, University of Adelaide Business School.


Lo, A., Mamaysky, H., Wang, J., 2000, “Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation,” Journal of Finance 55, pp. 1705-1765.


Pesaran, M. H., Timmermann, A. G., 1995, “Predictability of Stock Returns: Robustness and Economic Significance,” Journal of Finance, Vol. 50, No. 4.


Jeremy Grantham: A Bullish Bear


Is Jeremy Grantham, co-founder and CIO of GMO, bullish or bearish these days? According to Myles Udland at Business Insider, he’s both. He quotes Grantham:


“I think the global economy and the U.S. in particular will do better than the bears believe it will because they appear to underestimate the slow-burning but huge positive of much-reduced resource prices in the U.S. and the availability of capacity both in labor and machinery.”


“On top of all this is the decline in profit margins, which Grantham has called the “most mean-reverting series in finance,” implying that the long period of elevated margins we’ve seen from American corporations is most certainly going to come to an end. And so on.”


Corporate Profit Margins as a Leading Indicator


The claim is an interesting one. It certainly looks as if corporate profit margins are mean-reverting and, possibly, predictive of recessionary periods. And there is an economic argument why this should be so, articulated by Grantham as quoted in an earlier Business Insider article by Sam Ro:


“Profit margins are probably the most mean-reverting series in finance, and if profit margins do not mean-revert, then something has gone badly wrong with capitalism.


If high profits do not attract competition, there is something wrong with the system and it is not functioning properly.”


Thomson Research / Barclays Research’s take on the same theme echoes Grantham:


“The link between profit margins and recessions is strong,” Barclays’ Jonathan Glionna writes in a new note to clients. “We analyze the link between profit margins and recessions for the last seven business cycles, dating back to 1973. The results are not encouraging for the economy or the market. In every period except one, a 0.6% decline in margins in 12 months coincided with a recession.”


Buffett Weighs in


Even Warren Buffett gets in on the act (from 1999):


“In my opinion, you have to be wildly optimistic to believe that corporate profits as a percent of GDP can, for any sustained period, hold much above 6%.”


With the Illuminati chorusing as one on the perils of elevated rates of corporate profits, one would be foolish to take a contrarian view, perhaps. And yet, that claim of Grantham’s (“probably the most mean-reverting series in finance”) poses a challenge worthy of some analysis. Let’s take a look.


The Predictive Value of Corporate Profit Margins


First, let’s reproduce the St Louis Fed chart:


Corporate Profit Margins


A plot of the series autocorrelations strongly suggests that the series is not at all mean-reverting, but non-stationary, integrated order 1:


Next, we conduct an exhaustive evaluation of a wide range of time series models, including seasonal and non-seasonal ARIMA and GARCH:


The best fitting model (using the AIC criterion) is a simple ARIMA(0,1,0) model, integrated of order 1, as anticipated. The series is apparently difference-stationary, with no mean-reversion characteristics at all. Diagnostic tests indicate no significant patterning in the model residuals:
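
A sketch of that model search using the Econometrics Toolbox (y is the profit-margins series; the grid of orders and the use of AIC in this form are assumptions on my part):

bestAIC = Inf;
for p = 0:2
    for q = 0:2
        mdl = arima(p, 1, q);                              % integrated of order 1
        [est, ~, logL] = estimate(mdl, y, 'Display', 'off');
        aic = aicbic(logL, p + q + 2);                     % AR + MA + constant + variance
        if aic < bestAIC
            bestAIC = aic;  bestMdl = est;
        end
    end
end
res = infer(bestMdl, y);                                   % residuals of the best model
[~, pValue] = lbqtest(res)                                 % Ljung-Box test on the residuals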


Ljung-Box Test Probabilities


Using the model to forecast a range of possible values of the Corporate Profit to GDP ratio over the next 8 quarters suggests a very wide range, from as low as 6% to as high as 13%!


CONCLUSION


The opinion of investment celebrities like Grantham and Buffett to the contrary, there really isn’t any evidence in the data to support the suggestion that corporate profit margins are mean reverting, even though common-sense economics suggests they should be.


The best-available econometric model produces a very wide range of forecasts of corporate profit rates over the next two years, some even higher than they are today.


If a recession is just around the corner, corporate profit margins aren’t going to call it for us.


Market Noise and Alpha Signals


One of the perennial problems in designing trading systems is noise in the data, which can often drown out an alpha signal. This in turn creates difficulties for a trading system that relies on reading the signal, resulting in greater uncertainty about the trading outcome (i.e. greater volatility in system performance). According to academic research, a great deal of market noise is caused by trading itself. There is apparently not much that can be done about that problem: sure, you can trade after hours or overnight, but the benefit of lower signal contamination from noise traders is offset by the disadvantage of poor liquidity. Hence the thrust of most of the analysis in this area lies in the direction of trying to amplify the signal, often using techniques borrowed from signal processing and related engineering disciplines.


There is, however, one trick that I wanted to share with readers that is worth considering. It allows you to trade during normal market hours, when liquidity is greatest, but at the same time limits the impact of market noise.


Quantifying Market Noise


How do you measure market noise? One simple approach is to start by measuring market volatility, making the not-unreasonable assumption that higher levels of volatility are associated with greater amounts of random movement (i.e. noise). Conversely, when markets are relatively calm, a greater proportion of the variation is caused by alpha factors. During the latter periods, there is a greater information content in market data – the signal:noise ratio is larger and hence the alpha signal can be quantified and captured more accurately.


For a market like the E-Mini futures, the variation in daily volatility is considerable, as illustrated in the chart below. The median daily volatility is 1.2%, while the maximum value (in 2008) was 14.7%!


The extremely long tail of the distribution stands out clearly in the following histogram plot.


Obviously there are times when the noise in the process is going to drown out almost any alpha signal. What if we could avoid such periods?


Noise Reduction and Model Fitting


Let’s divide our data into two subsets of equal size, comprising days on which volatility was lower, or higher, than the median value. Then let’s go ahead and use our alpha signal(s) to fit a trading model, using only data drawn from the lower volatility segment.
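
As a sketch, with dailyVol an assumed vector of one volatility estimate per trading day:

medVol   = median(dailyVol);
quietIdx = dailyVol <  medVol;    % low-volatility days: used to fit the model
noisyIdx = ~quietIdx;             % high-volatility days: held out for later
fprintf('%d quiet days, %d noisy days\n', sum(quietIdx), sum(noisyIdx));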


This is actually a little tricky to achieve in practice: most software packages for time series analysis or charting are geared towards data occurring at equally spaced points in time. One useful trick here is to replace the actual date and time values of the observations with sequential date and time values, in order to fool the software into accepting the data, since there are then no longer any gaps in the timestamps. Of course, the dates on our time series plot or chart will be incorrect, but that doesn’t matter as long as we know what the correct timestamps are.


An example of such a system is illustrated below. The model was fitted to 3-Min bar data in EMini futures, but only on days with market volatility below the median value, in the period from 2004 to 2015. The strategy equity curve is exceptionally smooth, as might be expected, and the performance characteristics of the strategy are highly attractive, with a 27% annual rate of return, profit factor of 1.58 and Sharpe Ratio approaching double-digits.


Dealing with the Noisy Trading Days


Let’s say you have developed a trading system that works well on quiet days. What next? There are a couple of ways to go:


(i) Deploy the model only on quiet trading days; stay out of the market on volatile days; or


(ii) Develop a separate trading system to handle volatile market conditions.


Which approach is better? It is likely that the system you develop for trading quiet days will outperform any system you manage to develop for volatile market conditions. So, arguably, you should simply trade your best model when volatility is muted and avoid trading at other times. Any other solution may reduce the overall risk-adjusted return. But that isn’t guaranteed to be the case – and, in fact, I will give an example of systems that, when combined, will in practice yield a higher information ratio than any of the component systems.


Deploying the Trading Systems


The astute reader is likely to have noticed that I have “cheated” by using forward information in the model development process. In building a trading system based only on data drawn from low-volatility days, I have assumed that I can somehow know in advance whether the market is going to be volatile or not, on any given day. Of course, I don’t know for sure whether the upcoming session is going to be volatile and hence whether to deploy my trading system, or stand aside. So is this just a purely theoretical exercise? No, it’s not, for the following reasons.


The first reason is that, unlike the underlying asset market, the market volatility process is, by comparison, highly predictable. This is due to a phenomenon known as “long memory”, i.e. very slow decay in the serial autocorrelations of the volatility process. What that means is that the history of the volatility process contains useful information about its likely future behavior. [There are several posts on this topic in this blog – just search for “long memory”]. So, in principle, one can develop an effective system to forecast market volatility in advance and hence make an informed decision about whether or not to deploy a specific model.


But let’s say you are unpersuaded by this argument and take the view that market volatility is intrinsically unpredictable. Does that make this approach impractical? Not at all. You have a couple of options:


You can test the model built for quiet days on all the market data, including volatile days. It may perform acceptably well across both market regimes.


For example, here are the results of a backtest of the model described above on all the market data, including volatile and quiet periods, from 2004-2015. While the performance characteristics are not quite as good, overall the strategy remains very attractive.


Another approach is to develop a second model for volatile days and deploy both low- and high-volatility regime models simultaneously. The trading systems will interact (if you allow them to) in a highly nonlinear and unpredictable way. It might turn out badly – but on the other hand, it might not! Here, for instance, is the result of combining low- and high-volatility models simultaneously for the E-mini futures and running them in parallel. The result is an improvement (relative to the low-volatility model alone), not only in the annual rate of return (21% vs 17.8%), but also in the risk-adjusted performance, profit factor and average trade.


CONCLUSION


Separating the data into multiple subsets representing different market regimes allows the system developer to amplify the signal:noise ratio, increasing the effectiveness of his alpha factors. Potentially, this allows important features of the underlying market dynamics to be captured in the model more easily, which can lead to improved trading performance.


Models developed for different market regimes can be tested across all market conditions and deployed on an everyday basis if shown to be sufficiently robust. Alternatively, a meta-strategy can be developed to forecast the market regime and select the appropriate trading system accordingly.


Finally, it is possible to achieve acceptable, or even very good results, by deploying several different models simultaneously and allowing them to interact, as the market moves from regime to regime.


The popular VIX blog Vix and More evaluates the performance of the VIX ETFs (actually ETNs) and concludes that all of them lost money in 2015. Yes, both long volatility and short volatility products lost money!


Source: Vix and More


By contrast, our Volatility ETF strategy had an exceptional year in 2015, making money in every month but one:


How to Profit in a Down Market


How do you make money when every product you are trading loses money? Obviously you have to short one or more of them. But that can be a very dangerous thing to do, especially in a product like the VIX ETNs. Volatility itself is very volatile – it has an annual volatility (the volatility of volatility, or VVIX) that averages around 100% and which reached a record high of 212% in August 2015.


The CBOE VVIX Index


Selling products based on such a volatile instrument can be extremely hazardous – even in a downtrend: the counter-trends are often extremely violent, making a short position challenging to maintain.


Relative value trading is a more conservative approach to the problem. Here, rather than trading a single product, you trade a pair, or a basket, of them. Your bet is that the ETFs (or stocks) you are long will outperform the ETFs you are short. Even if your favored ETFs decline, you can still make money if the ETFs you are short decline even more.


This is the basis for the original concept of hedge funds, as envisaged by Alfred Jones in the 1940’s, and underpins the most popular hedge fund strategy, equity long-short. But what works successfully in equities can equally be applied to other markets, including volatility. In fact, I have argued elsewhere that the relative value (long/short) concept works even better in volatility markets, chiefly because the correlations between volatility processes tend to be higher than the correlations between the underlying asset processes (see The Case for Volatility as an Asset Class ).


2015 proved to be an extremely difficult year for volatility strategies generally. The reasons are not difficult to fathom: the sea-change in equity markets resulting from the Fed’s cessation of quantitative easing (for now) produced a no-less dramatic shift in the volatility term structure. During the summer months spot and front-month volatility surged, producing an inverted term structure in VIX futures and causing havoc for a great number of volatility carry strategies that depend on the usual downward-sloping shape of the forward volatility curve.


Performance results for many volatility strategies over the course of the year reflect the difficulties of managing these market gyrations. The blog site Volatility Made Simple, which charts the progress of 24 volatility strategies, reported the year-end results as follows:


While these strategies are hardly the “best in class”, the fact that all but a handful reported substantial losses for the year speaks volumes about the challenges faced by volatility strategies during periods of market turbulence. Simplistic approaches, such as volatility carry strategies, tend to blow up when volatility surges and the curve inverts, and the losses incurred during such episodes will often undo most or all of the gains accrued in prior months, or years.


Although our own volatility ETF portfolio on any given day might bear a passing resemblance to some of these strategies, in fact the logic behind it is considerably more sophisticated. We take an options-theoretic approach to pricing leveraged ETFs, which allows us to exploit the potential for selling expensive Theta against cheap Gamma, while at the same time affording opportunities to take advantage of the convexity of levered ETF products (for a more detailed explanation, see Investing in Leveraged ETFs – Theory and Practice). We also mix together multiple models using a wide range of data frequencies, in both time and trade space, and apply a model management system to optimize the result in real time (the reader is referred to my post on Meta-Strategies for more on this topic).


So much for theory – how did all this work out in practice in 2015? The following are some summary results for our volatility ETF strategy, which we operate for several managed accounts.


The substantial increase in annual returns during 2015 is largely a reflection of the surge in volatility during the summer months (especially July), although it is interesting to note, too, that performance also improved on a risk-adjusted basis during the year and currently stands at around 3.60. It is perhaps unlikely that the strategy will continue performing at these elevated levels in 2016, although volatility shows no sign of moderating yet. Our aim is to produce returns of 30% to 40% during a normal year, although 2016 could prove to be above-average, especially if the equity market corrects.


Jeff Swanson’s Trading System Success web site is often worth a visit for those looking for new trading ideas.


A recent post Seasonality S&P Market Session caught my eye, having investigated several ideas for overnight trading in the E-minis. Seasonal effects are of course widely recognized and traded in commodities markets, but they can also apply to financial products such as the E-mini. Jeff’s point about session times is well-made: it is often worthwhile to look at the behavior of an asset, not only in different time frames, but also during different periods of the trading day, day of the week, or month of the year.


Jeff breaks the E-mini trading session into several basic sub-sessions:


“Pre-Market”: between 5:30 and 8:30


“Open”: between 8:30 and 9:00


“Morning”: between 9:00 and 11:30


“Lunch”: between 11:30 and 13:15


“Afternoon”: between 13:15 and 14:00


“Close”: between 14:00 and 15:15


“Post-Market”: between 15:15 and 18:00


“Night”: between 18:00 and 5:30


In his analysis Jeff’s strategy is simply to buy at the open of the session and close that trade at the conclusion of the session. This mirrors the traditional seasonality study where a trade is opened at the beginning of the season and closed several months later when the season comes to an end.


Evaluating Overnight Session and Seasonal Effects


The analysis evaluates the performance of this basic strategy during the “bullish season”, from Nov-May, when the equity markets traditionally make the majority of their annual gains, compared to the outcome during the “bearish season” from Jun-Oct.
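
A sketch of that season split for the overnight session, with the inputs assumed: dates (trading-day datetimes), openPx (the overnight session opens) and closePx (the corresponding session closes):

sessRet = closePx ./ openPx - 1;               % buy the session open, sell the session close
mo      = month(dates);
bullish = ismember(mo, [11 12 1 2 3 4 5]);     % Nov-May
bearish = ~bullish;                            % Jun-Oct
fprintf('Bullish-season mean session return: %.4f%%\n', 100*mean(sessRet(bullish)));
fprintf('Bearish-season mean session return: %.4f%%\n', 100*mean(sessRet(bearish)));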


None of the outcomes of these tests is especially noteworthy, save one: the performance during the overnight session in the bullish season:


The tendency of the overnight session in the E-mini to produce clearer trends and trading signals has been well documented. Plausible explanations for this phenomenon are that:


(a) The returns process in the overnight session is less contaminated with noise, which primarily results from trading activity; and/or


(b) The relatively poor liquidity of the overnight session allows participants to push the market in one direction more easily.


Either way, there is no denying that this study and several other, similar studies appear to demonstrate interesting trading opportunities in the overnight market.


That is, until trading costs are considered. Results for the trading strategy from Nov 1997-Nov 2015 show a gain of $54,575, but an average trade of only just over $20:


Designing a Seasonal Trading Strategy for the Overnight Session


At this point an academic research paper might conclude that the apparently anomalous trading profits are subsumed within the bid-offer spread. But for a trading system designer this is not the end of the story.


If the profits are insufficient to overcome trading frictions when we cross the spread on entry and exit, what about a trading strategy that permits market orders on only the exit leg of the trade, while using limit orders to enter? Total trading costs will be reduced to something closer to $17.50 per round turn, leaving a net profit of almost $6 per trade.


Of course, there is no guarantee that we will successfully enter every trade – our limit orders may not be filled at the bid price and, indeed, we are likely to suffer adverse selection – i.e. getting filled on every losing trade, while missing a proportion of the winning trades.


On the other hand, we are hardly obliged to hold a position for the entire overnight session. Nor are we obliged to exit every trade MOC – we might find opportunities to exit prior to the end of the session, using limit orders to achieve a profit target or cap a trading loss. In such a system, some proportion of the trades will use limit orders on both entry and exit, reducing trading costs for those trades to around $5 per round turn.


The key point is that we can use the seasonal effects detected in the overnight session as a starting point for the development for a more sophisticated trading system that uses a variety of entry and exit criteria, and order types.


The following shows the performance results for a trading system designed to trade 30-minute bars in the E-mini futures overnight session during the months of Nov to May. The strategy enters trades using limit prices and exits using a combination of profit targets, stop loss targets, and MOC orders.


Data from 1997 to 2010 were used to design the system, which was tested on out-of-sample data from 2011 to 2013. Unseen data from Jan 2014 to Nov 2015 were used to provide a further (double blind) evaluation period for the strategy.


In this post I am going to take a look at what an investor can do to improve a hedge fund investment through the use of dynamic capital allocation. For the purposes of illustration I am going to use Cantab Capital’s Aristarchus program – a quantitative fund which has grown to over $3.5Bn in assets under management since its opening with $30M in 2007 by co-founders Dr. Ewan Kirk and Erich Schlaikjer.


I chose this product because, firstly, it is one of the most successful quantitative funds in existence and, secondly, because as a CTA its performance record is publicly available.


Cantab’s Aristarchus Fund


Cantab’s stated investment philosophy is that algorithmic trading can help to overcome cognitive biases inherent in human-based trading decisions, by exploiting persistent statistical relationships between markets. Taking a multi-asset, multi-model approach, the majority of Cantab’s traded instruments are liquid futures and forwards, across currencies, fixed income, equity indices and commodities.


Let’s take a look at how that has worked out in practice:


Whatever the fund’s attractions may be, we can at least agree that alpha is not amongst them. A Sharpe ratio of < 0.5 (I calculate to be nearer 0.41) is hardly in Renaissance territory, so one imagines that the chief benefit of the product must lie in its liquidity and low market correlation. Uncorrelated it may be, but an investor in the fund must have extremely deep pockets – and a very strong stomach – to handle the 34% drawdown that the fund suffered in 2013.


Improving the Aristarchus Fund Performance


If we make the assumption that an investment in this product is warranted in the first place, what can be done to improve its performance characteristics? We’ll look at that question from two different perspectives – the investor’s and the manager’s.


Firstly, from the investor’s perspective, there are relatively few options available to enhance the fund’s contribution, other than through diversification. One other possibility available to the investor, however, is to develop a program for dynamic capital allocation. This requires the manager to be open to allowing significant changes in the amount of capital to be allocated from month to month, or quarter to quarter, but in a liquid product like Aristarchus some measure of flexibility ought to be feasible.


An analysis of the fund’s performance indicates the presence of a strong dependency in the returns process. This is not at all unusual. Often investment strategies have a tendency to mean-revert: a negative dependency in which periods of poor performance tend to be followed by positive performance, and vice versa. CTA strategies such as Aristarchus tend to be trend-following, and this can induce positive dependency in the strategy returns process, in which positive months tend to follow earlier positive months, while losing months tend to be followed by further losses. This is the pattern we find here.


Consequently, rather than maintaining a constant capital allocation, an investor would do better to allocate capital dynamically, increasing the amount of capital after a positive period, while decreasing the allocation after a period of losses. Let’s consider a variation of this allocation plan, in which the amount of allocated capital is increased by 70% when the last monthly equity value exceeds the quarterly moving average, while the allocation is reduced to zero when the last month’s equity falls below the average. A dynamic capital allocation plan as simple as this appears to produce a significant improvement in the overall performance of the investment:
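A minimal MATLAB sketch of this allocation rule is shown below. It assumes a monthly equity series and a base capital amount, and uses a trailing three-month average as the “quarterly” benchmark; it is meant only to make the rule concrete, not to reproduce the results quoted here.

function alloc = dynamicAllocation(monthlyEquity, baseCapital)
% Allocate 170% of the base capital after the latest monthly equity value
% closes above its trailing 3-month average, and nothing after it closes below.
nMonths = numel(monthlyEquity);
alloc   = zeros(nMonths, 1);
for t = 4:nMonths
    qtrMA = mean(monthlyEquity(t-3:t-1));   % trailing quarterly moving average
    if monthlyEquity(t-1) > qtrMA
        alloc(t) = 1.7 * baseCapital;       % scale up after a good period
    else
        alloc(t) = 0;                       % stand aside after a losing period
    end
end
end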


The slight increase in annual volatility in the returns produced by the dynamic capital allocation model is more than offset by the 412bp improvement in the CAGR. Consequently, the Sharpe Ratio improves from 0.41 to 0.60.


Nor is this by any means the entire story: the dynamic model produces lower average drawdowns (7.93% vs. 8.52%) and, more importantly, reduces the maximum drawdown over the life of the fund from a painful 34.87% to a more palatable 23.92%.


The much-improved risk profile of the dynamic allocation scheme is reflected in the Return/Drawdown Ratio, which rises from 2.44 to 6.52.


Note, too, that the average level of capital allocated in the dynamic scheme is very slightly less than the original static allocation. In other words, the dynamic allocation technique results in a more efficient use of capital, while at the same time producing a higher rate of risk-adjusted return and enhancing the overall risk characteristics of the strategy.


Improving Fund Performance Using a Meta-Strategy


So much for the investor. What could the manager do to improve the strategy performance? Of course, there is nothing in principle to prevent the manager from also adopting a dynamic approach to capital allocation, although his investment mandate may require him to be fully invested at all times.


Assuming for the moment that this approach is not available to the manager, he can instead look into the possibilities for developing a meta-strategy. As I explained in my earlier post on the topic:


A meta-strategy is a trading system that trades trading systems. The idea is to develop a strategy that will make sensible decisions about when to trade a specific system, in a way that yields superior performance compared to simply following the underlying trading system.


It turns out to be quite straightforward to develop such a meta-strategy, using a combination of stop-loss limits and profit targets to decide when to turn the strategy on or off. In so doing, the manager is able to avoid some periods of negative performance, producing a significant uplift in the overall risk-adjusted return:


Conclusion


Meta-strategies and dynamic capital allocation schemes can enable the investor and the investment manager to improve the performance characteristics of their investment and investment strategy, by increasing returns, reducing volatility and the propensity of the strategy to produce substantial drawdowns.


We have demonstrated how these approaches can be applied successfully to Cantab’s Aristarchus quantitative fund, producing substantial gains in risk adjusted performance and reductions in the average and maximum drawdowns produced over the life of the fund.


Several readers responded to my recent invitation to send me details of their trading strategies, to see if I could develop a meta-strategy with superior overall performance characteristics (see original post here ).


One reader sent me the following strategy in EUR futures, with a promising-looking equity curve over the period from 2009-2014.


I have no information about the underlying architecture of the strategy, but a performance analysis shows that it trades approximately once per day, with a win rate of 49%, a PNL per trade of $4.79 and an IR estimated to be 2.6.


Designing the Meta-Strategy


My task was to see if I could design a meta-strategy that would “trade” the underlying strategy, i. e. produce signals to turn the underlying strategy on or off. Here we are designing a long-only strategy, where a “buy” trade represents the signal to turn the underlying strategy on, while an exit trade from the meta-strategy turns the underlying strategy off.


The meta-strategy is built in trade time rather than calendar time – we don’t want the meta-strategy trying to turn the underlying trading strategy on or off while it is in the middle of a trade. The data we use in the design exercise is the trade-by-trade equity curve, including the date and timestamp and the open, high, low and close values of the equity curve for each trade.
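The sketch below shows one plausible way to assemble such a trade-time series in MATLAB; the per-trade P&L vector is synthetic and the high/low values are approximated from the open and close, since intratrade equity data are not always available.

tradePNL = randn(1000,1)*50 + 5;           % placeholder P&L per trade
eqClose  = cumsum(tradePNL);               % close of each equity "bar"
eqOpen   = [0; eqClose(1:end-1)];          % open = previous close
eqHigh   = max(eqOpen, eqClose);           % without intratrade data, high/low
eqLow    = min(eqOpen, eqClose);           % collapse to the open/close extremes
equityBars = table(eqOpen, eqHigh, eqLow, eqClose, ...
    'VariableNames', {'Open','High','Low','Close'});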


No allowance for trading costs is necessary since all of the transaction costs are baked into the PNL of the underlying strategy – there are no additional costs entailed in turning the strategy on or off, as long as we do that in a period when there is no open position.


In designing the meta-strategy I chose simply to try to improve the overall net PNL. This is a good starting point, but one would typically go on to consider a variety of other possible criteria, including, for example, Net Profit / Av. Max Drawdown, Net Profit / Flat Time, MAR Ratio, Sharpe Ratio, Kelly Criterion, or a combination of them.


I used 80% of the trade data to design and test the strategy and reserved 20% of the data to test the performance of the meta-strategy out-of-sample.


Results


The analysis summarized below shows a clear improvement in the overall performance of the meta-strategy, compared to the underlying strategy. Net PNL and Average Trade are increased by 40%, while the trade standard deviation is noticeably reduced, leading to a higher IR of 5.27 vs 3.10. The win rate increases from around 2/3 to over 90%.


Although not as marked, the overall improvement in strategy performance metrics during the out-of-sample test period is highly significant, both economically and statistically.


Note that the Meta-strategy is a long-only strategy in which each “trade” is a period in which the system trades the underlying EUR futures strategy. So in fact, in the Meta-strategy, each trade represents a number of successive underlying, real trades (which of course may be long or short).


Put another way, the Meta-Strategy turns the underlying trading strategy on and off 276 times in total.


Conclusion


It is feasible to design a meta-strategy that improves the overall performance characteristics of an underlying trading strategy, by identifying the higher-value trades and turning the strategy on or off based on forecasts of its future performance.


No knowledge is required of the mechanics of the underlying trading strategy in order to design a profitable Meta-strategy.


Meta-strategies have been successfully applied to problems of capital allocation, where decisions are made on a regular basis about how much capital to allocate to multiple trading strategies, or traders.


" /> A Computer Science Perspective


Contents


Introduction


Part 1 - SIGNAL ANALYSIS


Signals


The Spectrum of Periodic Signals


The Frequency Domain


Noise


Part 2 - SIGNAL PROCESSING SYSTEMS


Systems


Filters


Nonfilters


Correlation


Adaptation


Biological Signal Processing


Part 3 - ARCHITECTURES AND ALGORITHMS


Graphical Techniques


Spectral Analysis


The Fast Fourier Transform


Digital Filter Implementation


Function Evaluation Algorithms


Digital Signal Processors


Part 4 - APPLICATIONS


Communications Signal Processing


Speech Signal Processing


Appendix (Whirlwind Review of Mathematics)


Errata


Excerpts


One PDF format excerpt from each part of the book


Screenshots


Downloads


Getting Help


There are three sources of help (beyond the user manual, of course).


A Google Group has been set up for support. It is open to anyone to join and read, but you must be a member to post, and posts are moderated (necessary after the spam-bots took over the old SourceForge mailing list). The URL for the group is http://groups.google.com/group/freemat. The e-mail address for the group is freemat@googlegroups.com.


Bug reports should be filed here: Report a Bug


Feature requests should be filed here: Request a Feature


Documentation


The manual is available as a PDF here


Built-in interactive help (Online Help), from the FreeMat Console by typing:


Tutorials on FreeMat are available here


The FreeMat Wiki is here


The FreeMat Blog is here


More of a "wish they were FAQ".


Q. What is FreeMat?


FreeMat is an environment for rapid engineering and scientific processing. It is similar to commercial systems such as MATLAB from Mathworks and IDL from Research Systems, but is Open Source. It is free as in speech and free as in beer.


Previous versions of FreeMat were released under MIT licenses. The current version is released under GPL. There are a number of great tools that are available to GPL-ed code (e. g. Qt, FFTW, FFCALL), and FreeMat is now one of them.


Q. Why another MATLAB clone? Have you heard of Octave, Scilab, etc.?


Yes! FreeMat is chartered to go beyond MATLAB to include features such as a codeless interface to external C/C++/FORTRAN code, parallel/distributed algorithm development (via MPI), and advanced volume and 3D visualization capabilities. As for the open source alternatives, try them out and decide for yourself. Who said choice was a bad thing?


Q. Is FreeMat 100% compatible with MATLAB? What about IDL?


No. FreeMat supports roughly 95% (a made-up statistic) of the features in MATLAB. The following table summarizes how FreeMat stacks up against MATLAB and IDL. Because we like to lead with the positive, here are the features that are supported:


N-dimensional array manipulation (by default, N is limited to 6)


Support for 8,16, and 32 bit integer types (signed and unsigned), 32 and 64 bit floating point types, and 64 and 128 bit complex types.


Built in arithmetic for manipulation of all supported data types.


Support for solving linear systems of equations via the divide operators.


Eigenvalue and singular value decompositions


Full control structure support (including, for, while, break, continue, etc.)


2D plotting and image display


Heterogeneous array types (called "cell arrays" in MATLAB-speak) fully supported


Full support for dynamic structure arrays


Split-radix based FFT support


Pass-by-reference support (an IDL feature)


Keyword support (an IDL feature)


Codeless interface to external C/C++/FORTRAN code


Native Windows support


Native sparse matrix support


Native support for Mac OS X (no X11 server required).


Function pointers (eval and feval are fully supported)


Classes, operator overloading


3D Plotting and visualization via OpenGL


Handle-based graphics


3D volume rendering capability (via VTK)


Here are the list of major MATLAB features not currently supported:


GUI/Widgets


Finally the list of features that are in progress (meaning they are in the development version or are planned for the near future):


Widgets/GUI building


FreeMat-to-MEX interface for porting MATLAB MEX files.


If you feel very strongly that one or more MATLAB features are missing that would be useful to have in FreeMat, you can either add it yourself or try and convince someone else (e. g. me) to add it for you. As for IDL, FreeMat is not compatible at all with IDL (the syntax is MATLAB-based), but a few critical concepts from IDL are implemented, including pass by reference and keywords.


Q. What platforms are supported?


Currently, Windows, Linux and Mac OS X are supported platforms. Other UNIX environments (such as IRIX/SOLARIS) may work. FreeMat essentially requires GNU gcc/g++ and LLVM/CLANG to build. The Win32 build requires MINGW32. I don't know if FreeMat will work with Windows 98/95/ME or NT4 as I don't have access to any of these platforms. A native port to Mac OS X is now available.


Q. How do I get it?


Click on the Downloads link here (or on the navigation bar on the left). Installers are available for Windows and Mac OS X, and source and binary packages are available for Linux.


Q. I found a bug! Now what?


Congratulations! Please file a bug report here. FreeMat is a fairly complicated program. Simply saying "it crashed" is not particularly helpful. If possible, please provide a short function or script that reproduces the problem. That will go a long way towards helping us figure out the problem. Also, the bug tracking feature of SourceForge will allow you to put in bugs anonymously, but please don't! Anonymous bug reports are difficult to follow up on.


Q. Where is function xyz?


There are a number of basic functions that are missing from FreeMat's repertoire. They will be added as time goes on. If there is a particular function you would like to see, either write it yourself or put in an RFE (Request For Enhancement) here.


Q. Who wrote FreeMat and why?


FreeMat has been in development by a group of volunteers for nearly a decade. The core team is listed here


MATLAB Examples


Basic Matlab Code Here you can find examples of the different types of arithmetic, exponential, trigonometric and complex-number operations that are handled easily with Matlab code.


Example: Simple Vector Algebra On this page we show how simple it is to work with vector algebra within Matlab.


Example: MATLAB Plots In this group of examples, we create several cosine MATLAB plots and work with different resolutions and plot parameters.


Example: MATLAB programming (Script Files) In this example, we program the plotting of two concentric circles and mark the center point with a black square. We use polar coordinates in this case (for a variation).


Example: A custom-made Matlab function Even though Matlab has plenty of useful functions, in this example we develop a custom-made Matlab function. It has one input value and two output values, transforming a given temperature into both Celsius and Fahrenheit degrees.
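A minimal version of such a function might look like the following (assuming, purely for illustration, that the input temperature is given in kelvin):

function [degC, degF] = kelvin2temp(kelvin)
% One input, two outputs: the same temperature in Celsius and Fahrenheit.
degC = kelvin - 273.15;        % kelvin to Celsius
degF = degC*9/5 + 32;          % Celsius to Fahrenheit
end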


Ex. Matrix manipulation In this section we study and experiment with matrix manipulation and boolean algebra. This tutorial creates some simple matrices and then combines them to form new ones of higher dimensions. We also extract data from certain rows or columns to form matrices of lower dimensions.


Ex. Dot Product The dot product is a scalar number and so it is also known as the scalar or inner product. In a real vector space, the scalar product between two vectors.


Ex. Cross Product In this example, we are going to write a function to find the cross product of two given vectors u and v. If u = [ u1 u2 u3 ] and v = [ v1 v2 v3 ].
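A straightforward implementation writes out the three components explicitly (MATLAB's built-in cross function computes the same thing):

function w = crossProduct(u, v)
% Cross product of two 3-element vectors u and v.
w = [ u(2)*v(3) - u(3)*v(2), ...
      u(3)*v(1) - u(1)*v(3), ...
      u(1)*v(2) - u(2)*v(1) ];
end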


Complex Numbers The unit of imaginary numbers is the square root of -1 and is generally designated by the letter i (or j). Many laws which are true for real numbers are true for imaginary numbers as well, as illustrated in this Matlab example.


Future Value of investment This program is an example of a financial application in Matlab. It calculates the future value of an investment when interest is a factor. It is necessary to provide the amount of the initial investment.
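In its simplest form (annual compounding assumed), the calculation is just FV = P*(1+r)^n; the numbers below are placeholders:

P  = 1000;              % initial investment
r  = 0.05;              % annual interest rate
n  = 10;                % number of years
FV = P*(1 + r)^n;       % future value after n years
fprintf('Future value: %.2f\n', FV)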


Recursion Recursion is a kind of tricky and smart construction which allows a function to call itself. The Matlab programming language supports it.
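The classic illustration is the factorial function, which calls itself until it reaches the base case:

function f = myFactorial(n)
% Recursive factorial: n! = n*(n-1)!
if n == 0
    f = 1;                        % base case
else
    f = n*myFactorial(n - 1);     % the function calls itself
end
end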


Electricity Cost Calculation We’ll now implement a kind of electricity cost calculator with Matlab. This calculator will help you estimate the cost of operating any given electrical device or appliance at home, based on the average kilowatt-hours (KWH) used.


Palindromes - working with indices and ascii A palindrome is a phrase, word, number or other sequence of characters that can be read the same way in either direction.
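One possible implementation compares the character codes of the cleaned-up string with those of its reverse:

function tf = isPalindrome(str)
% True if str reads the same in both directions (case and punctuation ignored).
s  = lower(str);
s  = s(isletter(s) | isstrprop(s, 'digit'));   % keep letters and digits only
tf = isequal(double(s), double(fliplr(s)));    % compare character codes both ways
end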


Introduction to MATLAB


Originally created by Kristian Sandberg, Department of Applied Mathematics, University of Colorado


Updated for compatibility with Release 13 by Grady Wright, Department of Mathematics, University of Utah


Goal


The goal of this tutorial is to give a brief introduction to the mathematical software MATLAB. After completing the worksheet you should know how to start MATLAB, how to use the elementary functions in MATLAB and how to use MATLAB to plot functions.


What is MATLAB?


MATLAB is widely used in all areas of applied mathematics, in education and research at universities, and in the industry. MATLAB stands for MATrix LABoratory and the software is built up around vectors and matrices. This makes the software particularly useful for linear algebra but MATLAB is also a great tool for solving algebraic and differential equations and for numerical integration. MATLAB has powerful graphic tools and can produce nice pictures in both 2D and 3D. It is also a programming language, and is one of the easiest programming languages for writing mathematical programs. MATLAB also has some tool boxes useful for signal processing, image processing, optimization, etc.


How to start MATLAB


Mac: Double-click on the icon for MATLAB.


PC: Choose the submenu "Programs" from the "Start" menu. From the "Programs" menu, open the "MATLAB" submenu. From the "MATLAB" submenu, choose "MATLAB".


Unix: At the prompt, type matlab.


You can quit MATLAB by typing exit in the command window.


The MATLAB environment


Note: From now on an instruction to press a certain key will be denoted by < >, e. g. pressing the enter key will be denoted as <enter>. Commands that should be typed at the prompt, will be written in courier font.


The MATLAB environment (on most computer systems) consists of menus, buttons and a writing area similar to an ordinary word processor. There are plenty of help functions that you are encouraged to use. The writing area that you will see when you start MATLAB, is called the command window . In this window you give the commands to MATLAB. For example, when you want to run a program you have written for MATLAB you start the program in the command window by typing its name at the prompt. The command window is also useful if you just want to use MATLAB as a scientific calculator or as a graphing tool. If you write longer programs, you will find it more convenient to write the program code in a separate window, and then run it in the command window (discussed in Intro to programming ).


In the command window you will see a prompt that looks like >>. You type your commands immediately after this prompt. Once you have typed the command you wish MATLAB to perform, press <enter>. If you want to interrupt a command that MATLAB is running, type <ctrl> + <c>.


The commands you type in the command window are stored by MATLAB and can be viewed in the Command History window. To repeat a command you have already used, you can simply double-click on the command in the history window, or use the <up arrow> at the command prompt to iterate through the commands you have used until you reach the command you desire to repeat.


Useful functions and operations in MATLAB


Using MATLAB as a calculator is easy.


Example: Compute 5sin(2.5^(3-pi)) + 1/75. In MATLAB this is done by simply typing
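The command is presumably along these lines (a reconstruction from the expression above, not copied from the original tutorial):

5*sin(2.5^(3-pi)) + 1/75    % note the explicit * for multiplication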


at the prompt. Be careful with parentheses and don't forget to type * whenever you multiply!


Note that MATLAB is case sensitive . This means that MATLAB knows a difference between letters written as lower and upper case letters. For example, MATLAB will understand sin(2) but will not understand Sin(2) .


Here is a table of useful operations, functions and constants in MATLAB.


Operation, function or constant


Compute the following expressions using MATLAB:


3cos(pi)


1+1+1/2+1/6+1/24-e


ln(1000 + 2pi^(-2))


e^(i*pi)


The number of combinations in which 12 persons can stand in line. (Hint: Use factorials.)


Obtaining Help on MATLAB commands


To obtain help on any of the MATLAB commands, you simply need to type


at the command prompt. For example, to obtain help on the gamma function, we type at the command prompt:
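The general form is help followed by the command name, so in this case the line would be something like:

help gamma    % prints the documentation for the gamma function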


Try this now. You may also get help about commands using the "Help Desk", which can be accessed by selecting the MATLAB Help option under the Help menu.


Note that the description MATLAB returns about the command you requested help on contains the command name in ALL CAPS. This does not mean that you use this command by typing it in ALL CAPS. In MATLAB, you almost always use all lower case letters when using a command.


Variables in MATLAB


We can easily define our own variables in MATLAB. Let's say we need to use the value of 3.5sin(2.9) repeatedly. Instead of typing 3.5*sin(2.9) over and over again, we can denote this variable as x by typing the following:
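The assignment is presumably of this form:

x = 3.5*sin(2.9)    % stores the value in x and echoes it to the command window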


(Please try this in MATLAB.) Now type
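Most likely the instruction is simply to type the variable name:

x    % typing the name of a variable displays its current value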


and observe what happens. Note that we did not need to declare x as a variable that is supposed to hold a floating point number as we would need to do in most programming languages.


Often, we may not want to have the result of a calculation printed out to the command window. To suppress this output, we put a semicolon at the end of the command; MATLAB still performs the command in "the background". If you defined x as above, now type
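Any assignment ending in a semicolon illustrates the point; the line below is only a hypothetical example:

y = 2*x;    % the trailing semicolon suppresses the printout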


and observe what happened.


In many cases we want to know what variables we have declared. We can do this by typing whos. Alternatively, we can view the values by opening the "Workspace" window. This is done by selecting the Workspace option from the View menu. If you want to erase all variables from the MATLAB memory, type clear. To erase a specific variable, say x, type clear x. To clear two specific variables, say x and y, type clear x y, that is, separate the different variables with a space. Variables can also be cleared by selecting them in the Workspace window and selecting the delete option.


Vectors and matrices in MATLAB


We create a vector in MATLAB by putting the elements within [] brackets.


Example: x=[ 1 2 3 4 5 6 7 8 9 10]


We can also create this vector by typing x=1:10. The vector ( 1 1.1 1.2 1.3 1.4 1.5 ) can be created by typing x=[ 1 1.1 1.2 1.3 1.4 1.5 ] or by typing x=1:0.1:1.5.


Matrices can be created according to the following example. The 3-by-3 matrix A with rows (1 2 3), (4 5 6) and (7 8 9) is created by typing


A=[1 2 3 ; 4 5 6; 7 8 9] .


i. e. rows are separated with semi-colons. If we want to use a specific element in a vector or a matrix, study the following example:


A=[ 1 2 3 ; 4 5 6 ; 7 8 9]


Here we extracted the second element of the vector by typing the variable and the position within parentheses. The same principle holds for matrices; the first number specifies the row of the matrix, and the second number specifies the column of the matrix. Note that in MATLAB the first index of a vector or matrix starts at 1, not 0 as is common in other programming languages.
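The indexing example was presumably along these lines:

x = [1 2 3 4 5 6 7 8 9 10];
x(2)        % returns 2, the second element of the vector
A = [1 2 3; 4 5 6; 7 8 9];
A(2,3)      % returns 6: row 2, column 3 of the matrix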


If the matrices (or vectors, which are special cases of matrices) are of the same dimensions, then matrix addition, matrix subtraction and scalar multiplication work just like we are used to. Type, for example, the commands shown below
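(with A as above and a second, hypothetical matrix B of the same size):

B = ones(3);    % a 3-by-3 matrix of ones
A + B           % matrix addition
A - B           % matrix subtraction
2*A             % scalar multiplication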


and observe what happens.


If we want to apply an operation such as squaring each element in a matrix, we have to use a dot (.) before the operation we wish to apply. Type the following commands in MATLAB.


A=[1 2 3 ; 4 5 6 ; 7 8 9 ]


and observe the result. The dot allows us to do operations elementwise. All built-in functions such as sin, cos, exp and so on automatically act elementwise on a matrix. Type
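(presumably commands along these lines)

A = [1 2 3; 4 5 6; 7 8 9];
A.^2           % the dot squares each element individually
sin(A)         % built-in functions act elementwise automatically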


and observe the result.


How to plot with MATLAB


There are different ways of plotting in MATLAB. The following two techniques, illustrated by examples, are probably the most useful ones.


Example 1: Plot sin(x^2) on the interval [-5,5]. To do this, type the following:
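For instance (the number of sample points is an arbitrary choice):

x = linspace(-5, 5, 201);    % a reasonably fine grid on [-5,5]
plot(x, sin(x.^2))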


and observe what happens.


Example 2: Plot exp(sin(x)) on the interval [-pi, pi]. To do this, type the following:
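For instance (matching the 101-point grid described below):

x = linspace(-pi, pi, 101);    % 101 equally spaced points between -pi and pi
plot(x, exp(sin(x)))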


and observe what happens. The command linspace creates a vector of 101 equally spaced values between -pi and pi (inclusive).


Occasionally, we need to plot values that vary quite differently in magnitude. In this case, the regular plot command fails to give us an adequate graphical picture of our data. Instead, we need a command that plots values on a log scale. MATLAB has three such commands: loglog, semilogx, and semilogy. Use the help command to see a description of each function. As an example of where we may want to use one of these plotting routines, consider the following problem:


Example 3: Plot x^(5/2) for x = 10^-5 to 10^5. To do this, type the following:
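For instance (again assuming a 101-point grid):

x = logspace(-5, 5, 101);    % 101 logarithmically spaced points from 10^-5 to 10^5
plot(x, x.^(5/2))            % a linear-axis plot hides most of the detail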


and observe what happens. Now type the following command:
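(presumably the log-scale version of the same plot)

loglog(x, x.^(5/2))    % logarithmic axes reveal the behaviour across all magnitudes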


The command logspace is similar to linspace; however, it creates a vector of 101 points logarithmically distributed between 10^-5 and 10^5.


The following commands are useful when plotting:


MATLAB Filter Design Wizard for AD9361


The AD9361 Filter Design Wizard is a small MATLAB App, which can be used to design transmitter and receiver FIR filters, which take into account the magnitude and phase response from other analog and digital stages in the filter chain. This tool provides not only a general purpose low pass filter designer, but also magnitude and phase equalization for other stages in the signal path.


With this wizard, users can perform the following tasks:


Choose correct digital filters to use for receive and transmit.


Design the programmable FIR filters, get the filter coefficients and save them in a .ftr file, which can be directly loaded into the hardware.


Examine the independent response of each filter, and the composite response of all the filters, including both digital and analog filters.


Videos


Here is a brief introduction to why you would want to use this tool, and how to use it.


Downloads


In order to run the wizard, your MATLAB license needs to include the following components:


MATLAB (R2012b or higher version is required)


Signal Processing Toolbox


DSP System Toolbox


In addition, in order to generate HDL, your MATLAB license needs to include the following component:


In order to get the wizard, please go to Analog Devices GitHub. Different releases of AD9361 Filter Design Wizard and their source files can be found here:


For each release, the wizard is available as a MATLAB App installer (mlappinstall) or in archive form (zip or tarball).


If using a checkout or unpacked archive, the application can be run in one of two ways:


Right click “AD9361_Filter_Wizard.fig” and select “Open in GUIDE” to open the figure. Then type “Ctrl+T” to run the figure.


Within the application directory run the command “AD9361_Filter_Wizard” from the MATLAB command line.


The Filter Design Wizard has been applied in the SimRF models of AD9361, provided by MathWorks as a hardware support package. Download this version if it is to be used with the AD9361 SimRF model:


To learn more about AD9361 modeling and to download the Tx and Rx models, the hardware support package can be found here:


Use MATLAB App


Generally speaking, there are two ways you can use the design wizard:


MATLAB App: A graphical user interface is created to facilitate the process of filter design. Users can easily define the input, observe the design performance, and specify the way they want to save the results. This is a more straightforward method to use the wizard.


MATLAB function: The link to design functions can be found in the Download section. They are MATLAB functions, which users can launch from the MATLAB command window by properly defining the input parameters. Using this way, users have more control of the internal design process.


In this section, we are going to elaborate on the first option - MATLAB App.


Basic Functions


After you launch the MATLAB App, a drop-down list appears in “Device Settings”, which includes the default parameter profiles for several widely used LTE applications. You can move the highlight bar to the one you would like to start with.


This table is stored in github.


The LTE Release-8 physical layer specification actually supports 105 different bandwidth options (not just the 6 shown above). Occupied RF bandwidths between 1.08 MHz and 19.8 MHz in 180 kHz steps comply with the spec, and while these filters can be designed (manually), they are not included as defaults.


In addition, you can also save your favorite parameter settings in this list, such as “foobar (Rx & Tx)” shown in the figure.


Assume that you choose the “LTE10 (Rx & Tx)” profile. After you click it, all the parameters are filled in automatically for you, as shown in the figure below. There are three categories of input parameters: magnitude specifications, frequency specifications, and AD936x clock settings. If you are satisfied with all the parameters, you can go ahead and click “Design Filter” to start the design.


As soon as the design process completes, you will see a magnitude plot displayed on the top half of the GUI, where the specified Fpass, Fstop, Apass and Astop are highlighted in the plot. The x-axis runs from 0 to half of the data rate. Below it, on the right, you will see a “Filter Results” section, where the actual Apass, Astop, the number of FIR taps and the pass band group delay variance are shown. From these numbers, you will get an idea of whether the design meets the requirements quantitatively.


If you are interested in more details of the design performance, you can click the “FVTool” buttons to the left of “Filter Results” to launch the Filter Visualization Tool (fvtool) provided by The MathWorks. For your convenience, we provide this tool on two different frequency scales: one from 0 Hz to half of the data rate, the other from 0 Hz up to half of the converter rate.


If you are mainly interested in pass band, click the top button, it will open the following three figures:


Magnitude response of half band filters and HB + designed FIR filter.


Magnitude Response of the designed FIR only. Besides the magnitude response, you can use the toolbar in the upper left corner (as highlighted in the square) to navigate to the other responses, including the phase response, group delay response, impulse response, poles/zeros, etc. This will give you a better understanding of the designed FIR.


Overall group delay on pass band. For your convenience, the group delay variance has been calculated and indicated on the figure.


If you are interested in the whole frequency band, click the bottom button; it will open the following figure:


Magnitude response of half band filters and HB + designed FIR filter. You can easily take a closer look at any portion of the magnitude response by using the “Zoom In/Out” functions on the toolbar (as highlighted in the square).


After the deeper analysis, if you are satisfied with the results and would like to save the designed FIR filter, there are several options you can choose from. These options are in the “Controls” portion on the upper left corner of the GUI.


Save object and data to workspace: If you will use the designed filter chain with some other MATLAB functions or Simulink models, you can simply leave it in the workspace by clicking the “Save to Workspace” button, as shown in the figure below. After clicking this button and exiting the App, you will find an mfilt.cascade object named “AD9361_Tx_Filter_object” or “AD9361_Rx_Filter_object”, depending on whether it is on Tx or Rx.


When you click “Save to Workspace”, besides the filter object, there is also a data structure saved to workspace, which will initialize the SimRF model of FMCOMMS2. The data structure is named “FMCOMMS2_TX_Model_init” or “FMCOMMS2_RX_Model_init”.


Save coefficients to a ftr file: If you will use the designed FIR filter with the IIO Oscilloscope application, you can save the FIR coefficients by clicking the “Coefficients to ftr File” button, as shown in the figure below.


You need to have designed both the Transmit and Receive filters before you can use the “Coefficients to ftr File” button. Otherwise, this button is grayed out.


After that, a window will pop up, asking you to specify the name and the location of the ftr file, as shown in the figure below.


If you plan to use the Filter Design Wizard with a Zynq-based platform, there are several options available that will facilitate this process. These options are in the “Target (Zynq Board)” portion of the GUI.


Connect to the target: In the IP box, you should input the IP address of the target. On a Linux system, it can easily be found with the “ifconfig” command. Then, click the “Connect to Target” button.


Read clock settings: If a target is detected at the specified IP address, the “Read Clock Settings” button will show up, as shown in the picture below. If you want to overwrite the current clock settings with the ones belonging to the target, you can click this button.


Save FIR coefficients to the target: If an FIR filter is designed for the target, the FIR coefficients can be saved directly to the target by clicking the “Coefficients to Target” button, as shown in the picture below.


Advanced Functions


The functions introduced so far provide a basic infrastructure to design and observe the FIR filter. If you would like to have more control and functionality, you can turn on the “Advanced” option, as shown in the figure below, which provides you with several more advanced options.


Phase Equalization: If you would like to have the FIR filter do phase equalization, you can turn on the “Phase Equalization” option, as shown in the figure below. The main purpose of the phase equalization is to reduce the pass band group delay *variance* brought by analog filters, digital filters and FIR filter, so that for signals at different frequencies, they will be delayed by an almost identical amount when going through the filter chain.


After you click “Design Filter”, the phase equalization part of the FIR design file is executed, and you will get an updated FIR filter design. Comparing the group delay variance in the Results portion, it is decreased from 16.6 ns to 1.52 ns with phase equalization. Also note that when the design process completes, there is an updated target delay number (this number is 0 before phase equalization) shown in the “Filter Options” portion.


Please note the phase equalization process may take a few minutes, depending on the performance of your PC, since it tries to find the target delay that yields the minimum group delay variance.


Astop (FIR): This is a new parameter in magnitude specifications. It specifies the attenuation of FIR (not the composite response), which corresponds to the “dBstop_FIR” input in the design file. This parameter is not needed most of the time, so you can leave it as 0. However, if you do want to play around with it, you can enter a number there. For more information about “dBstop_FIR”, please refer to “Some Notes About dBstop_FIR” at the end of this page.


Fcutoff (Analog): This is a new parameter in frequency specifications. It specifies the cutoff frequency of the analog Butterworth filters. By default, this parameter is calculated for you by the App according to the Fpass and Fstop you entered, so you can leave it as it is. However, if you do want to play around with it, you can enter a number there.


Use Internal FIR: Due to the constraint on power consumption, some users may not want to use the FIR filter on AD936x. Instead, they want to move the FIR filter implementation on FPGA or some other processors. The Filter Design Wizard can also accommodate this requirement. If you decide not to use the AD936x FIR, you can turn off the “Use Internal FIR” option, as shown in the figure below, and click “Design Filter”. In this case, there is no longer any constraint on the number of the FIR taps, so the design file conducts a minimum order design. Comparing the FIR Taps in the Results portion, it is decreased from 128 to 105 if the AD936x FIR is not used.


Generate HDL: Following the previous step, if you decide to have the FIR filter implemented on an FPGA, the design wizard can help you generate the HDL code. By clicking “Generate HDL”, the 'fdhdltool' function (http://www.mathworks.com/help/hdlfilter/fdhdltool.html) is called and the Generate HDL dialog box will pop up, as shown in the figure below. There are quite a few options concerning how you would like the HDL to be generated. In the end, by clicking “Generate”, the HDL will be generated for you.


Toolbar


The icons shown on the toolbar below provide a shortcut to some frequently used functions.


From left to right, the first four icons are related to filter parameter settings:


New Filter: It will open the drop-down list for you.


Open Filter Design Parameters: It will open a saved parameter setting and load it for you.


Save Parameters to File: It has a similar function to the “Coefficients to ftr File” button.


Save Parameters to Workspace: It has a similar function to the “Object to Workspace” button.


The next four icons work on the magnitude response plot shown in the GUI.


Zoom In: Click the area of the axes where you want to zoom in, or drag the cursor to draw a box around the area you want to zoom in on.


Zoom Out: Click the area of the axes where you want to zoom out, or drag the cursor to draw a box around the area you want to zoom out on.


Pan: Interactively pan the view of a plot.


Data Cursor: Enable the interactive data cursor mode.


Use MATLAB Functions


In addition to MATLAB App, users can also employ the MATLAB functions to complete the filter design. What they need to do is to launch the MATLAB functions from the MATLAB command window by properly defining the input parameters in a MATLAB structure.


In MATLAB command window, the command is:


Please note this method is suitable for those users who have a clear idea about the parameter settings. For those who are not sure about the parameters, the MATLAB App is a better way to start with.


Transmit


According to the AD9361 Filter Guide, the TX signal path is as follows:


The digital and analog paths are separated by DAC. Before DAC, there are four digital filters. The first one (PROG TX FIR) is a programmable poly-phase FIR filter, which can interpolate by a factor of 1, 2, or 4, or it can be bypassed if not needed. The others (HB1, HB2, HB3 and INT3) are all digital filters with fixed coefficients, and they can be turned on or turned off. After DAC, there are two low-pass analog filters.


Inputs and Outputs


According to the design requirements, the inputs and outputs of the MATLAB function are as follows:


Inputs


Fin = Input sample data rate (in Hz)


FIR_interp = FIR interpolation factor


HB_interp = half band filters interpolation factor


DAC_mult = ADC to DAC ratio


PLL_mult = PLL multiplication


Fpass = passband frequency (in Hz)


Fstop = stopband frequency (in Hz)


dBripple = max ripple allowed in passband (in dB )


dBstop = min attenuation in stopband (in dB )


dBstop_FIR = min rejection that TFIR is required to have (in dB )


phEQ = Phase Equalization on (not -1)/off (-1)


int_FIR = Use AD9361 FIR on (1)/off (0)


wnom = analog cutoff frequency (in Hz)


Outputs


tfirtaps = fixed point coefficients for TFIR


txFilters = system object for visualization (PROG FIR + HBs, analog filters not included)


Receive


According to the AD9361 Filter Guide, the RX signal path is as follows:


The analog and digital paths are separated by the ADC. Before the ADC, there are two low-pass analog filters. After the ADC, there are three digital filters with fixed coefficients (HB3/DEC3, HB2, HB1) followed by a programmable poly-phase FIR filter (PROG RX FIR). The FIR filter can decimate by a factor of 1, 2, or 4, or it can be bypassed if not needed.


Inputs and Outputs


According to the design requirements, the inputs and outputs of the MATLAB function are as follows:


Inputs


Fout = Output sample data rate (in Hz)


FIR_interp = FIR decimation factor


HB_interp = half band filters decimation factor


PLL_mult = PLL multiplication


Fpass = passband frequency (in Hz)


Fstop = stopband frequency (in Hz)


dBripple = max ripple allowed in passband (in dB )


dBstop = min attenuation in stopband (in dB )


dBstop_FIR = min rejection that TFIR is required to have (in dB )


phEQ = Phase Equalization on (not -1)/off (-1)


int_FIR = Use AD9361 FIR on (1)/off (0)


wnom = analog cutoff frequency (in Hz)


Outputs


rfirtaps = fixed point coefficients for RFIR


rxFilters = system object for visualization (HBs + PROG FIR, analog filters not included)


Example: Tx LTE-5


In this section, we present the results for the LTE-5 transmit signal path using the MATLAB function. The input parameters are as follows:


Therefore, in MATLAB command window, the command is:


After this command is executed, in the command window, you will see the two output parameters:


tfirtaps: 128-tap FIR coefficients


txFilters: object of the transmit filter chain (digital part)


We can observe the independent filters, as well as the composite response, by specifying the stage of the object. For example,


If you are interested in the filter response of HB1, you can proceed to apply the fvtool on HB1,


and you will get:


Key Steps in Design


In this section, we will talk about the key steps in AD9361 filter design. Referring to this section, you will have a better understanding of the MATLAB design files. Later on, if you would like to implement your own design algorithm, you can edit the design files to incorporate your changes.


The AD9361 filter design file can be found here:


Based on the structure of Tx and Rx filters, in the design process, we first need to determine which half band digital filters should be included. We then design the programmable FIR filter and get its coefficients. In the end, we complete the design and return the whole filter chain in an object. Since both Tx and Rx designs follow a similar workflow, the following steps take the Tx side for example.


Define Filters


Define Analog Filters


For the analog part, there is a third-order Butterworth low-pass filter and a single-pole low-pass filter on the Tx side. Both of them can easily be defined with the MATLAB function butter.


Define Half Band Filters


The digital filters with fixed coefficients can easily be defined by referring to the AD9361 filter guide. Since they are interpolation filters on the transmit path, the coefficient declaration is followed by the mfilt.firinterp function.


Take HB1 for example, the full-scale range for this filter is 2^13, and it has an interpolation factor of 2, so its coefficients are scaled by 2^(-14).


If your MATLAB license includes Fixed-Point Designer, the Hm1 object can be further defined in a fixed point format, which is a better representation of the real hardware:


Determine Half-band Filters


Since there are 4 digital half-band filters on the TX signal path, there are a finite number of interpolations they can provide. Therefore, the digital half-band filters are picked up according to the overall interpolation factor required by the user.


Design TFIR


Ideally, when the whole filter chain is complete, it will have a flat response of magnitude 1 in the passband and magnitude 0 in the stopband; call this the ideal response. Since in the previous step we have already selected the digital half-band filters, we can compute the response of the chain without the TFIR; call this the composite response. The required response of the TFIR is therefore the ideal response divided by the composite response:


On passband, the required response rg and the weight w is:


For the analog filters, unlike the digital filters, there is no “cascade” function to combine them and quickly calculate the composite response, so we made a helper function analogresp to calculate the overall response of the two analog filters and the converter. On the Tx side, the DAC is represented by a sinc function, while on the Rx side the ADC is represented by a sinc^3 function.


On stopband, the required response rg = 0 and the weight w is:


One other constraint about TFIR is the number of filter taps. In the AD9361 Filter Guide, it says “the number of taps is configurable between a minimum of 16 taps and a maximum of 128 taps in groups of 16”. Therefore, the following piece of code calculates the tap number N .
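A minimal sketch of that constraint (not the wizard's actual code) simply rounds an estimated order up to a multiple of 16 and clamps it to the allowed range; Nest below is a hypothetical estimate of the required number of taps.

Nest = 90;                     % hypothetical estimate of the required taps
N    = 16*ceil(Nest/16);       % round up to the next multiple of 16
N    = min(max(N, 16), 128);   % keep within the 16..128 hardware range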


Given the tap number N, the required responses on the passband and stopband (A1 and A2), as well as the corresponding weights (W1 and W2), we can now use the fdesign.arbmag function to design the TFIR filter. In the following piece of code, B = 2, which means there are two bands in the design.
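A hedged sketch of such a two-band design is shown below; it is not necessarily the wizard's exact call. F1/A1/W1 and F2/A2/W2 stand for the passband and stopband frequency grids, target amplitudes and weights computed in the previous steps, and the per-band weight option names are assumed to be accepted by the equiripple design method.

B   = 2;                                                   % two bands: passband and stopband
d   = fdesign.arbmag('N,B,F,A', N-1, B, F1, A1, F2, A2);   % filter order = taps - 1
Hmd = design(d, 'equiripple', ...
             'B1Weights', W1, 'B2Weights', W2);            % weight each band separately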


The design of the TFIR is saved in the system object Hmd . In order to get the 16-bit filter coefficients, the following line is used:


Visualization


fvtool opens FVTool and displays the magnitude response of the digital filter defined with the system object. Using FVTool you can display the phase response, group delay, impulse response, step response, pole-zero plot, and coefficients of the filter.


For example, the following piece of code uses fvtool to display the TFIR filter we just designed in the previous step. Hmd is the corresponding system object:
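A call as simple as the following is enough, since fvtool accepts the filter object directly:

fvtool(Hmd)    % opens FVTool on the designed TFIR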


Some Notes About dBstop_FIR


The “dBstop_FIR” variable ensures that, no matter how much rejection comes from external filters, the FIR filter itself is required to have a minimum rejection. To understand the reason for this, imagine that at some frequency we need 60 dB of rejection and we have an external filter that gives us 75 dB of rejection. If the FIR filter gave us 15 dB of gain at that frequency, we would still meet the frequency-response requirement. However, having gain in a stop band would cause the filter to resonate strongly at that frequency, which would result in time domain problems such as very large coefficients and over-ranging of signals at that frequency. “dBstop_FIR” limits this concern.


Picking a proper dBstop_FIR value is a very important step in designing the filter. Since it determines the weight values on the stopband, different dBstop_FIR values will result in very different filter responses. It is suggested to try different dBstop_FIR values and observe the time-domain coefficients (it is desirable to have smooth coefficients) and frequency-domain responses (passband ripple and stopband attenuation) until you find the one that shows the best combination of everything.


Generally speaking, dBstop_FIR plays a more important role in narrow bandwidth filter design than in wide bandwidth filter design. It can even be omitted when designing a filter with wide bandwidth.


Support


If you have any questions about these scripts/tools, please ask on the EngineerZone Help & Support forum.


Problems with the Simple Moving Average


The simple moving average of a security is a basic arithmetic measure of the change in its price over time. This average is calculated by adding up the closing price of the security for each day in a given period and then dividing the sum by the number of days. No special weight is given to any particular day. The moving average can be calculated over a short- or long-term cycle, and the result is a measure of the average price of the security for that period. Since the formula is so basic, it often fails to give key information on price trends in the security.
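In MATLAB, this equal-weight average is a one-liner with conv; the prices below are synthetic placeholders, used only to make the sketch self-contained.

pxClose = cumsum(randn(250,1)) + 100;   % placeholder daily closing prices
N       = 9;                            % length of the averaging window in days
w       = ones(N,1)/N;                  % identical weights that sum to one
sma     = conv(pxClose, w, 'valid');    % average of each N-day window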


Short-Term vs Long-Term Average


Simple moving average is often used to discover an uptrend in stock pricing. For any given security, an analyst can find a short-term and a long-term moving average. For example, a security's short-term average over the past month may be $4 per share. The long-term average over twelve months may be $3.50 per share. This indicator could show the security is experiencing a short-term lift in prices. The analyst must then decide whether the security will fall back below the average or break a previously imposed price ceiling. Depending on other factors, the result of this analysis could lead an analyst to recommend buying or selling the security. However, used alone, the simple moving average could not show an analyst whether a security is briefly on an uptrend or actually breaking through to a higher ceiling.


Weighted Average vs Simple Average


Perhaps the biggest downside of simple moving average is the way it imposes the same weight to each day in the price cycle being considered. This can be compared to a teacher who uses simple grading as opposed to grading on a trend. If a student performs very well in the first half of a semester and then fails three tests toward the end of a semester, the simple average for this student's grade may still be a "B." However, if the student would like an indication of where his or her grade may head next semester, it would be important to note the way the grade dropped off. Weighting the test scores to give more importance to the end of the semester's grades, the teacher may actually give the student a "C" grade.


The same model can be used with security price to indicate which direction it will head in the immediate future. For example, over the past twelve months, a security has a simple moving average of $4 per share; however, in the past 10 days, the average is $4.25 per share. If more weight is put on to this past 10 days using an exponential moving average, the average may total out to $4.05 per share or $4.10 per share. Another security also has a twelve-month simple average of $4 per share; however, in the past 10 days, the average is $3.50 per share. In this case, the first security would be experiencing the uptrend. An exponential moving average would show this.
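A small sketch of that comparison is shown below, with synthetic prices; the smoothing factor alpha = 2/(N+1) is the conventional choice for an N-period exponential moving average, not a value taken from the text.

px    = cumsum(0.05*randn(250,1)) + 4;     % placeholder share prices
N     = 10;
sma10 = mean(px(end-N+1:end));             % equal-weight 10-day average
alpha = 2/(N + 1);                         % EMA smoothing factor
ema   = filter(alpha, [1 alpha-1], px, px(1)*(1-alpha));
ema10 = ema(end);                          % exponentially weighted estimate
fprintf('SMA(10) = %.2f   EMA(10) = %.2f\n', sma10, ema10)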




The ARIMA Procedure


The ARIMA procedure provides the identification, parameter estimation, and forecasting of autoregressive integrated moving average (Box-Jenkins) models, seasonal ARIMA models, transfer function models, and intervention models.


The ARIMA procedure offers complete ARIMA (Box-Jenkins) modeling with no limits on the order of autoregressive or moving average processes. Estimation can be done by exact maximum likelihood, conditional least squares, or unconditional least squares. In addition you can model intervention models, regression models with ARMA errors, transfer function models with fully general rational transfer functions, and seasonal ARIMA models. PROC ARIMA's model identification diagnostics include plots of autocorrelation, partial autocorrelation, inverse autocorrelation, and cross-correlation functions.


PROC ARIMA also allows tentative autoregressive moving average (ARMA) order identification based on smallest canonical correlation, extended sample autocorrelation function, or information criterion analysis. ARIMA model-based interpolation of missing values is permitted. Forecasting is tied to parameter estimation methods. Finite memory forecasts are used for models estimated by maximum likelihood or exact nonlinear least squares, while infinite memory forecasts are used for models estimated by conditional least squares.


The ARIMA procedure offers a variety of model diagnostic statistics, including


Akaike's information criterion (AIC)


Schwarz's Bayesian criterion (SBC or BIC)


Ljung-Box chi-square test statistics for white noise residuals


stationarity tests, including Augmented Dickey-Fuller (including seasonal unit root testing), Phillips-Perron, and random-walk with drift tests


The %DFTEST macro performs Dickey-Fuller tests for simple unit roots or seasonal unit roots in a time series. The %DFTEST macro is useful to test for stationarity and determine the order of differencing needed for the ARIMA modeling of a time series.


For further details, see the SAS/ETS® User's Guide: The ARIMA Procedure (PDF | HTML).




EWMA Template


What is it: An EWMA (Exponentially Weighted Moving-Average) Chart is a control chart for variables data (data that is both quantitative and continuous in measurement, such as a measured dimension or time). The chart plots weighted moving average values; a weighting factor chosen by the user determines how older data points affect the mean value compared to more recent ones. Because the EWMA Chart uses information from all samples, it detects much smaller process shifts than a normal control chart would. As with other control charts, EWMA charts are used to monitor processes over time.


Why use it: It applies weighting factors which decrease exponentially. The weighting for each older data point decreases exponentially, giving much more importance to recent observations while still not discarding older observations entirely. The degree of weighting decrease is expressed as a constant smoothing factor α, a number between 0 and 1. α may be expressed as a percentage, so a smoothing factor of 10% is equivalent to α = 0.1. Alternatively, α may be expressed in terms of N time periods, where α = 2/(N + 1); for example, N = 19 is equivalent to α = 0.1.


The observation at a time period t is designated Yt, and the value of the EMA at any time period t is designated St. S1 is undefined. S2 may be initialized in a number of different ways, most commonly by setting S2 to Y1, though other techniques exist, such as setting S2 to an average of the first 4 or 5 observations. The prominence of the S2 initialization's effect on the resultant moving average depends on α; smaller α values make the choice of S2 relatively more important than larger α values, since a higher α discounts older observations faster.
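As a concrete illustration of the recursion just described, here is a minimal Matlab sketch. It assumes the common convention S(t) = α·Y(t-1) + (1-α)·S(t-1) together with the S2 = Y1 initialization mentioned above; the data and the value of α are made up.

% Minimal sketch of the EWMA recursion described above; Y and alpha are
% illustrative values only.
Y     = [10.2 10.5 9.8 10.1 10.7 10.4 9.9];   % sample observations
alpha = 0.1;                                   % smoothing factor

S    = zeros(size(Y));
S(2) = Y(1);                                   % initialize S2 = Y1 (S1 undefined)
for t = 3:numel(Y)
    S(t) = alpha*Y(t-1) + (1-alpha)*S(t-1);    % exponential weighting
end
disp(S(2:end))                                 % EWMA values from t = 2 onwards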


The advantage of EWMA charts is that each plotted point includes several observations, so you can use the Central Limit Theorem to say that the average of the points (or the moving average in this case) is normally distributed and the control limits are clearly defined.


Where to use it: The charts' x-axes are time based, so that the charts show a history of the process. For this reason, you must have data that is time-ordered; that is, entered in the sequence from which it was generated. If this is not the case, then trends or shifts in the process may not be detected, but instead attributed to random (common cause) variation.


When to use it: EWMA (or Exponentially Weighted Moving Average) Charts are generally used for detecting small shifts in the process mean. They will detect shifts of 0.5 sigma to 2 sigma much faster than Shewhart charts with the same sample size. They are, however, slower in detecting large shifts in the process mean. In addition, typical run tests cannot be used because of the inherent dependence of data points. EWMA Charts may also be preferred when the subgroups are of size n=1. In this case, an alternative chart might be the Individual X Chart, in which case you would need to estimate the distribution of the process in order to define its expected boundaries with control limits.


When choosing the value of lambda used for weighting, it is recommended to use small values (such as 0.2) to detect small shifts, and larger values (between 0.2 and 0.4) for larger shifts. An EWMA Chart with lambda = 1.0 is an X-bar Chart. EWMA charts are also used to smooth the effect of known, uncontrollable noise in the data. Many accounting processes and chemical processes fit this categorization. For example, while day-to-day fluctuations in accounting processes may be large, they are not purely indicative of process instability. The choice of lambda can be chosen to make the chart more or less sensitive to these daily fluctuations.


How to use it: Interpreting an EWMA Chart, Standard Case (Non-wandering Mean). Always look at the Range chart first. The control limits on the EWMA chart are derived from the average Range (or Moving Range, if n=1), so if the Range chart is out of control, then the control limits on the EWMA chart are meaningless. On the Range chart, look for out-of-control points. If there are any, then the special causes must be eliminated. Remember that the Range is the estimate of the variation within a subgroup, so look for process elements that would increase variation between the data in a subgroup.


After reviewing the Range chart, interpret the points on the EWMA chart relative to the control limits. Run Tests are never applied to an EWMA chart, since the plotted points are inherently dependent, containing common points. Never consider the points on the EWMA chart relative to specifications, since the observations from the process vary much more than the Exponentially Weighted Moving Averages. If the process shows control relative to the statistical limits for a sufficient period of time (long enough to see all potential special causes), then we can analyze its capability relative to requirements. Capability is only meaningful when the process is stable, since we cannot predict the outcome of an unstable process.


Wandering Mean Chart: Look for out-of-control points. These represent a shift in the expected course of the process, relative to its past behavior. The chart is not very sensitive to subtle changes in a drifting process, since it accepts some level of drift as being the nature of the process. Remember that the control limits are based on an exponentially smoothed prediction error for past observations, so the larger the prior drifts, the more insensitive the chart will be to detecting changes in the amount of drift.


Stochastic Oscillator


The Stochastic Oscillator Technical Indicator compares where a security’s price closed relative to its price range over a given time period. The Stochastic Oscillator is displayed as two lines. The main line is called %K. The second line, called %D, is a Moving Average of %K. The %K line is usually displayed as a solid line and the %D line is usually displayed as a dotted line.


There are several ways to interpret a Stochastic Oscillator. Three popular methods include:


Buy when the Oscillator (either %K or %D) falls below a specific level (e.g. 20) and then rises above that level. Sell when the Oscillator rises above a specific level (e.g. 80) and then falls below that level;


Buy when the %K line rises above the %D line and sell when the %K line falls below the %D line;


Look for divergences. For instance: where prices are making a series of new highs and the Stochastic Oscillator is failing to surpass its previous highs.


Calculation:


The Stochastic Oscillator has four variables:


%K periods. This is the number of time periods used in the stochastic calculation;


%K Slowing Periods. This value controls the internal smoothing of %K. A value of 1 is considered a fast stochastic; a value of 3 is considered a slow stochastic;


%D periods. This is the number of time periods used when calculating a moving average of %K;


%D method. The method (i.e. Exponential, Simple, Smoothed, or Weighted) that is used to calculate %D.


The formula for %K is:

%K = (CLOSE - LOW(%K)) / (HIGH(%K) - LOW(%K)) * 100

where CLOSE is today's closing price, LOW(%K) is the lowest low over the last %K periods, and HIGH(%K) is the highest high over the last %K periods.


The %D moving average is calculated according to the formula %D = SMA(%K, N), where N is the number of %D periods and SMA denotes a simple moving average (replaced by the exponential, smoothed, or weighted variant if a different %D method is selected).
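For readers who want to reproduce the indicator outside MQL4, here is a hedged Matlab sketch of the %K/%D calculation above. The price vectors highP, lowP and closeP are placeholder data, and the period lengths are illustrative assumptions; %D is computed as a simple moving average of %K, one of the methods listed above.

% Hedged sketch of the %K / %D calculation; all inputs are illustrative.
closeP = 100 + cumsum(randn(100,1));           % placeholder closing prices
highP  = closeP + rand(100,1);                 % placeholder highs
lowP   = closeP - rand(100,1);                 % placeholder lows

kPeriods = 14;  dPeriods = 3;                  % %K and %D look-back lengths
n = numel(closeP);
K = nan(n,1);
for t = kPeriods:n
    hh   = max(highP(t-kPeriods+1:t));         % highest high in %K periods
    ll   = min(lowP(t-kPeriods+1:t));          % lowest low in %K periods
    K(t) = 100 * (closeP(t) - ll) / (hh - ll);
end
D = filter(ones(dPeriods,1)/dPeriods, 1, K);   % simple moving average of %K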


Source Code


Full MQL4 source of Stochastic Oscillator is available in the Code Base: Stochastic Oscillator




Moving Averages, Historical Examples. The top chart below shows an example of how moving averages, when confirmed by price action, can signal trading opportunities.


In the second chart we see moving averages applied to the AUD/NZD currency pair (although examples of this are easily found with all pairs). Notice the Three Outside Up pattern that penetrates the 20-period moving average (Black Line) at the same time the 50-Day SMA (Yellow) crosses over the 200-Day SMA (Green). This reversal pattern, and the fact that price bounces off of the 200-day moving average, shows that the downside momentum is lost and signals that a rally may follow.


Here we see a classic sequence of candlestick patterns combined with moving average signals.




Matlab Filter Implementation


In this section, we will implement (in matlab) the simplest lowpass filter


(from Eq. (1.1)). For the simplest lowpass filter, we had two program listings:


Fig.1.3 listed simplp for filtering one block of data, and


Fig.1.4 listed a main program for testing simplp .


In matlab, there is a built-in function called filter which will implement simplp as a special case. The syntax is y = filter(B, A, x), where x is the input signal (a vector of any length), y is the output signal (returned equal in length to x), A is a vector of filter feedback coefficients, and B is a vector of filter feedforward coefficients. The filter function performs the following iteration over the elements of x to implement any causal, finite-order, linear, time-invariant digital filter:


Note that Eq. (2.1) could be written directly in matlab using two for loops (as shown in Fig.3.2). However, this would execute much slower because the matlab language is interpreted, while built-in functions such as filter are pre-compiled C modules. As a general rule, matlab programs should avoid iterating over individual samples whenever possible. Instead, whole signal vectors should be processed using expressions involving vectors and matrices. In other words, algorithms should be ``vectorized'' as much as possible. Accordingly, to get the most out of matlab, it is necessary to know some linear algebra [58].


The simplest lowpass filter of Eq. (1.1) is nonrecursive (no feedback), so the feedback coefficient vector A is set to 1. Recursive filters will be introduced later in §5.1. The minus sign in Eq. (2.1) will make sense after we study filter transfer functions in Chapter 6.


The feedforward coefficients needed for the simplest lowpass filter are B = [1, 1] (with A = 1).

With these settings, the filter function implements y(n) = x(n) + x(n-1).


Figure 2.1: Main matlab program for implementing the simplest lowpass filter.
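The Figure 2.1 listing is not reproduced in this copy; the following is a minimal sketch of what such a test program might look like, assuming the coefficients B = [1, 1] and A = 1 given above.

% Minimal sketch of a test program for the simplest lowpass filter,
% y(n) = x(n) + x(n-1), using matlab's built-in filter function.
N = 10;                     % length of test signal
x = rand(1, N);             % a short random test input

B = [1, 1];                 % feedforward coefficients of the simplest LPF
A = 1;                      % no feedback (nonrecursive filter)

y = filter(B, A, x);        % filtered output, same length as x
disp([x; y]);               % compare input and output samples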


Question


9.2 For a seven-term moving average filter, write an expression for the


a. Difference equation


b. Impulse response


c. Transfer function


d. Frequency response


9.3a. Sketch the frequency response for a seven-term moving average filter using dB for magnitude and degree for phase. Use digital frequency steps of pi/8 radians or smaller.


b. With reference to the frequency response sketch, explain why this filter can guarantee that no pass-band phase distortion will occur.
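Here is a hedged Matlab sketch (not the textbook's answer key) showing how the magnitude in dB and the phase in degrees of a seven-term moving-average filter can be examined numerically at steps of pi/8; it assumes the Signal Processing Toolbox's freqz.

% Frequency response of a seven-term moving-average filter (sketch only).
b = ones(1,7)/7;                 % impulse response: seven equal weights
a = 1;                           % nonrecursive filter

w = 0:pi/8:pi;                   % digital frequency steps of pi/8 rad
H = freqz(b, a, w);              % frequency response at those frequencies

subplot(2,1,1); plot(w/pi, 20*log10(abs(H)), '-o'); grid on
ylabel('Magnitude (dB)')
subplot(2,1,2); plot(w/pi, angle(H)*180/pi, '-o'); grid on
xlabel('Normalized frequency (\times\pi rad/sample)'); ylabel('Phase (deg)')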


Answers


Algorithm Links


This page contains links to sites that contain useful software for biomedical signal processing. I don't endorse any of them in particular, but it might help you if you are looking to avoid re-inventing the wheel.


Physionet - open source ECG code and databases with papers and links.


Numerical Recipes in C - A collection of well written C (and Fortran) algorithms with accompanying explanations and advice on use (plus example wrapper code to implement each one).


Oxford's PARG - a great collection of Matlab toolboxes for ICA, ARMA modeling, Stats, HMM and other things.


Steve Roberts' collection of Matlab code and toolboxes for everything.


Kalman Filtering toolbox for Matlab by Kevin Murphy + all the links you'll need.


Netlab - the classic neural network and related tools. Matlab code from Ian Nabney. Get his book if you can.


Gatsby unit at UCL - machine learning algos in Matlab. Also see Zoubin Ghahramani.


Barry Quin's Time Series Analysis. Sig. Proc. & Stat. Inf. links (papers, book and Matlab code).


DSPGuru - A collection of Open source software, tutorials and links to DSP apps, hardware and tricks.


Matlab File Exchange - a collection of free algos posted by random people (with ratings).


Google Groups - DSP. A beta usenet group from those multi-zeroed wonders.


Peter Kootsookos's frequency estimation algos.


Journals in which you might want to publish or just use.


http://www.physionet.org - Extensive selection of open source C & Matlab code, papers and data on ECGs
http://www.librasch.org/librasch/index.html - Extensive set of libraries for reading biomedical signal formats
http://www.robots.ox.ac.uk/parg/software.html - Extensive set of statistical learning tools (HMMs, EKFs, particle filters) and associated papers
http://www.ncrg.aston.ac.uk/netlab/index.php - Wonderful open source Matlab-based statistical classification toolbox
http://biron.usc.edu/jcarvalh/ecgconvert.zip - matlab tools for viewing physionet data
http://www.owlnet.rice.edu/elec301/Projects02/adaptiveFilters/code.htm - adaptive filters for ECG analysis
http://granat.es.lth.se/elektrovetenskap/biosignal/ - the website associated with Lief and Pablo's new book
http:// - website suggested by julie? kaplan/hrv/doc/ - Open source HRV and nonlinear analysis tools for matlab

Nonlinear analysis: http://www.mpipks-dresden.mpg.de/tisean/TISEAN_2.1/docs/indexf.html - nonlinear time series analysis software in C; http://www.macalester.edu/kaplan/software/ - nonlinear time series analysis software in Matlab; www.mcsharry.net - Nonstationary sig proc tools: non-stationary signal processing tools in Matlab.

ICA: http://www.cis.hut.fi/projects/ica/fastica/ ; http://www.robots.ox.ac.uk/parg/projects/ica/riz/code.html (variational Bayes ICA); http://www.tsi.enst.fr/icacentral/algos.html ; http://www.tech.plym.ac.uk/spmc/ica/ica_tools.html ; http://www.uvic.es/eps/recerca/processament/demostracions/Separation%20Sources/download.html ; http://www.cnl.salk.edu/fbach/kernel-ica/index.htm ; http://www.bsp.brain.riken.go.jp/ICALAB/ ; http://www.sccn.ucsd.edu/pc1jvs/bookmatlabcode/bookmatlabcode.html ; http://www.inference.phy.cam.ac.uk/mackay/Software.html ; http://www.inference.phy.cam.ac.uk/mackay/BayesICA.html ; http://mole.imm.dtu.dk/toolbox/ ; http://www.aims.ac.za/mackay/itila/p0.html (code and book download); http://www.robots.ox.ac.uk/cardoso/stuff.html ; http://www.dcs.ex.ac.uk/ica/icapp/contributors.html (key contributors to ICA & book); http://www.cis.hut.fi/projects/ica/book/ (ICA book); http://www.lis.inpg.fr/ress_humaines/personnes/dsppersonne.php?submit=1&level=1&numpersonne=70 (nonlinear ICA, Christian Jutten); http://www.tech.plym.ac.uk/spmc/ica/ica_matlab.html ; http://www.tsi.enst.fr/icacentral/ (ICA central); http://www.robots.ox.ac.uk/

Machine learning and pattern recognition/neural networks: Gatsby unit at UCL - machine learning algos in Matlab (also see Zoubin Ghahramani); Netlab - classic neural network and related tools, Matlab code from Ian Nabney (get his book if you can); http://www.robots.ox.ac.uk/

Adaptive Filtering: http://www.owlnet.rice.edu/

Kalman filtering: http://www.iau.dtu.dk/research/control/kalmtool.html (KALMTOOL); http://www.robots.ox.ac.uk/sjrob/Outgoing/software.html#kalman (SJROB's Bayesian KF); Kalman Filtering toolbox for Matlab by Kevin Murphy + all the links you'll need.

parg/pmbmi.html - book on Probabilistic Modelling in Bioinformatics and Medical Informatics by Steve Roberts et al; http://www.robots.ox.ac.uk/sjrob/Outgoing/software.html ; http://www.mathworks.com/products/statistics/functionlist.html ; http://www-stat.stanford.edu/susan/courses/b494/index/node77.html (bootstrap stats); http://www.macalester.edu/%7Ekaplan/Resampling/Matlab/index.html

Scientific libraries: http://www.gnu.org/software/gsl/ (GNU GSL); Numerical Recipes in C - a collection of well written C (and Fortran) algorithms with accompanying explanations and advice on use (plus example wrapper code to implement each one).

Data libs: http://www.librasch.org/librasch/index.html - Extensive set of libraries for reading biomedical signal formats; http://www.physionet.org - Extensive selection of open source C & Matlab code, papers and data on ECGs

Data sources: http://www.physionet.org - Extensive selection of open source C & Matlab code, papers and data on ECGs

Hardware: DSPGuru - A collection of open source software, tutorials and links to DSP apps, hardware and tricks.

Medical/Biological based sites: http://ocw.mit.edu/OcwWeb/Health-Sciences-and-Technology/HST-542JSpring-2004/CourseHome/index.htm - Prof. Mark's cardiovascular physiology site on open courseware at MIT; http://medstat.med.utah.edu/kw/ecg/ecg_outline/index.html ; http://medstat.med.utah.edu/kw/ecg/ACC_AHA.html - ECG diagnosis

Open source (and relevant) Journals/Conferences: www.biomedical-engineering-online (BioMedical Engineering OnLine); IEEE TBME; www.cinc.org (Computers in Cardiology)


Bar Chart with Average Line


A Bar Chart is used to represent data using horizontal bars. One way in which you can augment a bar chart is to add an average line.


Create a bar chart


Create a bar chart in Excel using any particular data set at your disposal. In our case, we took the Forbes list of the richest people on earth (2009 figures). Here's what the data looks like.


To create a bar chart, select the data range shaded in grey and insert a bar chart. The output of this step would look something like this. Please note that we have eliminated the chartjunk from this one and therefore this looks a bit better than the default chart Excel would’ve produced.


Add a new series to the bar chart


Now to add the average line. If you noticed, we added another row in our dataset that shows the average of the records in the original data set. (There are a number of ways in which an average line can be added to a chart. The below highlights one such method.) Add a new data series to the chart and specify the average as the value to plot, as shown below.


Convert new series to an XY Chart


Once you’ve added another series to the chart, select it and then change the chart type to XY chart.


Specify X-Y coordinates of the point in the combination bar chart


To properly align the point with the right X-Y coordinates, we edit the data source and change the X and Y value of the point. We set the Y value to 0 so that it aligns with the bottom axis. We set the X axis to the average.


Add Y Error bars to the series


To add the average line that cuts across the horizontal bars, we double-click the point and select the tab labelled “Y Error Bars”. In the error amount, enter the appropriate values. In our case we fill the custom values of +ve and -ve with the average.


Format Y Error Bars and the chart


At this point we carry out the following steps:


Edit the vertical axis scale so that it has minimum = 0 and maximum = 1. Once set, you can then go ahead and delete the scale using the delete key.


Delete the top horizontal (secondary) axis.


Format the Y error line so that the marker shape is changed to that of a rectangle with a solid fill.


Format the Y error line so that it is changed to a dotted line with a lighter shade that goes well with the overall chart.


Format the chart so that the bottom (primary) axis scale is visible.


Here is a snapshot of the bar chart as it would look as you carry out the above steps. The average line that cuts across the bars would move as the chart data gets modified.


And here’s the final bar chart. For the sticklers for convention, you can arrange the data series in descending order. You can even add a label to the average line which lets the reader know where the average lies. (Averages lie, don’t they!)
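The walkthrough above is Excel-specific. For comparison, here is a hedged Matlab sketch of the same idea (horizontal bars plus a line at the average); the category names and values are made up for illustration.

% Horizontal bar chart with an average line (illustrative data only).
names  = {'A', 'B', 'C', 'D', 'E'};
values = [40 37 35 22.5 22];

barh(values);                                 % horizontal bars
set(gca, 'YTickLabel', names);
hold on
avg = mean(values);
plot([avg avg], ylim, '--k');                 % vertical line at the average
text(avg, 0.5, sprintf('  avg = %.1f', avg)); % label near the axis
hold off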




Average Rates of Change


Does the ball fall at a constant rate?


Objective . The goal of this demo is to provide students with a concrete understanding of the average rate of change for physical situations and for functions described in tabular or graphic form.


Level: Precalculus and calculus courses in high school or college.


Prerequisites: Familiarity with the concept of slope of a line and computing the slope of a line.


Platform: No particular software package is required. Support for a viewer of gif or mov files is required. Viewers within a browser, Windows media player, QuickTime, or a commercial program can be used. It is recommended that a viewer that contains a stop/start feature be used when incorporating the animation in a lecture format or when students view the animation on an individual basis. A set of interactive Excel demos that use graphs of functions is included.


Instructor's Notes: In mathematics average rate of change is a stepping stone to instantaneous rates of change and the fundamental concept of a limit. Thus it is important to provide a variety of learning experiences about average rates of change so that students understand this fundamental notion of change. This demo provides visual experiences which connect to the algebraic expressions for average rates of change.


Average Rates for Objects in Motion: To provide a focus, we start with two common visualizations; a falling ball and a moving vehicle. Figure 1 shows an animation of a ball falling from rest under the influence of gravity, while Figure 2 displays a car traveling along a straight track with constant acceleration. (Click here to download a zipped file containing the animations in Figures 1 and 2 in both gif and QuickTime formats.)


Average rate of change is often introduced by saying it is the change in distance over the change in time:


Let s denote distance and t denote time; then we use the symbol Δs for the change in distance and Δt for the change in time. Thus we have average rate of change = Δs / Δt.


In situations involving the motion of an object, as in Figures 1 and 2, we use the terminology average velocity in place of average rate of change. In such cases we denote the average velocity by v_avg, and we have v_avg = Δs / Δt.


In Figure 1 we can compute the average velocity of the falling ball between two marks on the ruler displayed beside the path of the ball. For instance, we can measure the time it takes the ball to drop from the top (0 meter mark) to the 3 meter mark. In this case we have Δs = 3 - 0 = 3 meters and Δt ≈ 0.72 seconds,

and so the average velocity of the ball from s = 0 to s = 3 is v_avg = Δs / Δt = 3 / 0.72 ≈ 4.17 meters per second.


(The symbol ≈ means approximately equal to, since your calculator will display more than two decimal digits when you compute the ratio of 3 to 0.72. Displaying several decimal places is sufficient for our work.)


Many students don't recognize that things like falling bodies, moving vehicles, rising populations, and decaying radioactive materials do not change at constant rates. For instance, the falling ball has different average velocities as it passes various meter marks. Figure 3 shows a sample of the average velocities of the ball.


A display of the average velocities as the ball passes meter marks is available as an animation. By clicking here you can download a zipped file containing both an animated gif and a QuickTime file that illustrates the different average velocities of the falling ball. We recommend that when you show it to a class that you use the QuickTime file. This will let you start and stop the animation so that you can discuss portions with your students. Figure 4 contains a segment of the animation. The full animation starts tracking the average velocities as the ball starts at the top.


A similar display of the average velocities as the car passes successive 10 meter marks is available as an animation. By clicking here you can download a zipped file containing both an animated gif and a QuickTime file that illustrates the different average velocities of the car. A preview of this animation appears in Figure 5.


A good classroom activity for students is to have them construct a table of distances covered, elapsed times, and average velocities of the falling ball animation and/or the moving car animation. One suggestion is first show the animation discussing it as it progresses. Using the QuickTime file lets you start and stop the animation. After the initial discussion show the animation a second time during which students construct a table as shown in Figure 6. A brief discussion of these tables reinforces the idea that the average velocity of the objects in these demos changes.


To provide computational experience for calculating average velocities, imagine that the ball was falling on the moon or on mars. Basically the animation remains the same, but the times to fall from the top to a meter mark will vary. Tables for each of these scenarios appear in Figure 7. A brief discussion of why the times and hence average velocities change is a natural link to basic physics ideas.


If you prefer to use the moving car animation we can alter its behavior by giving the car an initial velocity or changing the value of the constant acceleration. Two such cases appear in the tables shown in Figure 8. Again a brief discussion of the changes in time and average velocity lead to an easy link to familiar physical concepts.


The tables in Figures 7 and 8 can be downloaded as a pdf file by clicking here. Each table appears on a separate page for ease of duplication for class handouts.


Average Rate of Change of a Function: Each of the tables in Figures 6 - 8 is a discrete sample of a function. Next we connect average rates of change to the slope of a line segment between two points on a curve.


Figure 9a displays a plot of the time vs. distance data for the falling ball. (We have included the point (0, 0) since at time = 0, the distance traveled is s = 0.) The points shown in Figure 9a are a sample of the points along the curve shown in Figure 9b, which is a plot of time vs. distance for all distances along the ruler shown in Figure 1. The data points in Figure 9a are from the falling ball data in Figure 6.


In Figure 10 we display an animation that draws a line segment from the origin to each data point in Figure 9a and displays the slope of that segment. Comparing the slopes of the line segments with the average velocities for the falling ball that are displayed in Figure 6 we see that they are the same.


Click here to download a zipped file containing the animation of Figure 10 in both gif and QuickTime formats. (For class discussion we recommend using the QuickTime file so that you can start and stop the animation as you discuss the ideas.)


An animation similar to that in Figure 10 for the moving car is available. Click here to download a zipped file containing the moving car animation in both gif and QuickTime formats. (For class discussion we recommend using the QuickTime file so that you can start and stop the animation as you discuss the ideas.)


Average Rate of Change of a Function over an Interval


The average rate of change of a function y = f(x) over an interval [a, b] in its domain is defined as follows:

average rate of change = Δy / Δx = (f(b) - f(a)) / (b - a).

This is illustrated geometrically as shown in Figure 11, and we say

that the quotient Δy / Δx is the slope of the secant line from point (a, f(a)) to point (b, f(b)). (Note: a secant line is any line connecting two points on the same curve.) If the function y = f(x) measures the distance covered as time varies from x = a to x = b, then the slope of the secant line from (a, f(a)) to (b, f(b)) is interpreted as an average velocity. This situation was illustrated for the falling ball by the animation in Figure 10.
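For readers following along in Matlab, here is a minimal sketch of the definition above; the function f and the interval [a, b] are illustrative assumptions, not data from the demo.

% Average rate of change (secant slope) of a function over [a, b].
f = @(x) x.^2;                        % an illustrative function
a = 1;  b = 3;                        % interval endpoints

avgRate = (f(b) - f(a)) / (b - a);    % slope of the secant line
fprintf('average rate of change on [%g, %g] = %g\n', a, b, avgRate);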


The falling ball and moving car examples discussed above are familiar situations to most students. The data shown in Figure 6 and used in the animations for these examples is limited to intervals starting at 0 and ending at a certain meter mark. We have constructed an interactive Excel demo for each of these examples that permits computation of the average rate of change over many intervals. Figure 12 shows the screen for the falling ball. The idea is to move the sliders to calculate the average rate of change between the two points on the curve. (The curve shown was generated by creating an interpolant to the discrete data in Figure 6 which is displayed in Figure 9a.) This Excel file for the falling ball can be executed or downloaded by clicking here. For a corresponding Excel file for the moving car click here.


For a set of interactive Excel files for computing average rates of change along a curve see the auxiliary resources below.


1. We have constructed a set of five interactive Excel demos involving average rates of change. You can execute or download this collection by clicking here. These demos could be used in class by the instructor, used with groups in a lab setting, or assigned as out-of-class investigations. A set of suggestions for questions that could be assigned as part of the student investigations is available by clicking here.


2. Radar Guns: At the January 2005 Joint Mathematics Meeting in Atlanta, Ga., Melvin Royer of Indiana Wesleyan University gave a talk entitled Calculus Demonstrations: Economical Radar Guns in the contributed paper session MY FAVORITE DEMO—Innovative Strategies for Mathematics Instructors organized by David R. Hill and Lila F. Roberts. In his talk he discussed a classroom demonstration that shows how to use economical radar guns to measure average velocity. The economical radar gun consists of a meter stick and a stop watch. The setup is to have a ball rolling down an inclined plane that has a meter stick fixed along its edge, and to use the stop watch to get the time it takes to cover a particular distance. By having multiple stop watches so that students can work in teams, data like that given in Figure 6 can easily be recorded. Students can then determine the average rate of change over different time intervals and get first-hand experience with a situation in which the moving object does not have constant velocity. Professor Royer kindly gave us permission to include his abstract in this demo; click here for a pdf file of the abstract. In his abstract he doesn't stop with average velocity, and we will refer to it again in a demo on instantaneous velocity.


A similar class demo with a complete lesson plan can be found at http://www.tvgreen.com/Spectrum08/document/MotionLab.htm. The materials needed are easily obtained.


For some basic information on how actual radar guns work go to http://electronics.howstuffworks.com/radar-detector1.htm. Portions of the information give a nice description that provides a connection between math and physics.


3. The demo Escalator Motion and Average Rates of Change provides an early introduction to related rates of change using escalator motion and average rates of change.


4. Connections to other topics. The concept of average rate of change is often illustrated through examples and exercises that use applications like average velocity, average acceleration, average weight gain, average cost, and so on. These are followed by tying the average rate of change to the instantaneous rate of change by a limit process.


Credits . This demo was constructed by


David R. Hill Department of Mathematics Temple University


and is included in Demos with Positive Impact with his permission.




Introduction to Matlab Scripts


Envision It! Workshop, April 12, 1997


In the previous tutorials, we have been introduced to the basic methods of performing calculations and plotting graphs in Matlab. We have used these to create some interesting models. In this tutorial, we will extend our ability to create models by learning the fundamentals of creating Matlab script files (or M-files).


A Matlab script file (also known as an M-file because Matlab script files always have the file extension .m) is a method for executing a series of Matlab instructions without typing all of the commands from the keyboard. This allows one to automate tasks, and will allow you to write script files that a student could run without knowing the details of Matlab programming. Begin by creating the following script file by selecting File-New-MFile from the Matlab menu. Give it a name such as test1.m. Now type test1 in the Matlab window and it will ask for a number, square it and then display the result. (A sketch of such a script appears below.)
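The test1.m listing did not survive in this copy; the following is a minimal sketch of what it might contain, based on the description above (prompt for a number, square it, display the result).

% test1.m - minimal sketch of the script described above.
x = input('Enter a number: ');        % prompt the user for a number
y = x^2;                              % square it
disp(['The square is ' num2str(y)])   % display the result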


Loops


An important technique in programming is the concept of a loop - this allows us to perform a series of instructions many times without typing them in each time. Consider the following situation: there is a reservoir of some material (for the problem we are considering today this would be unmelted snow; after melting, this will flow into a river). A very simple model of this would be that we have 100 units in the reservoir and remove 20% each time step (in other words, 20% of the snow melts each day). We can make a script file to calculate the amount of melting and the amount remaining after each day by making the script file melting.m (a sketch appears after the list of rules below). Note that this program will run the loop a total of 20 times. To make the program more readable, there are a few rules you may wish to follow:


Any text that follows a percent sign ( % ) is treated as a comment and ignored by Matlab. This allows you to put in documentation about how your program works.


If a line ends with three periods ( ... ), the following line is treated as a continuation (as in the disp command above).


It is a good idea to indent the text between a for or if statement and the corresponding end statement (Matlab ignores any spaces before a statement).


When there are for or if blocks (like in the program above), I generally include a comment after the end to indicate what for or if block is being terminated.
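Here is the sketch of melting.m promised above, written to follow the rules just listed. The variable names Reservoir, LossRate and OutFlow match the names referenced later in the text, but the exact original listing is not reproduced here.

% melting.m - sketch of the reservoir model: start with 100 units and
% remove 20% per day for 20 days.
Reservoir = 100;                       % initial amount of unmelted snow
LossRate  = 0.2;                       % fraction melting each day

for Day = 1:20
    OutFlow   = LossRate * Reservoir;  % amount that melts today
    Reservoir = Reservoir - OutFlow;   % amount remaining
    disp(['Day ' num2str(Day) ': melted ' num2str(OutFlow) ...
          ', remaining ' num2str(Reservoir)])
end   % for Day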


Condition Blocks


The other type of block is the if block, which allows conditional execution of a series of statements. For example, in the melting process, as snow melts, the melting rate can increase (this is because a completely snow-covered surface will reflect most sunlight, but as some melts and exposes ground, the ground will absorb sunlight and melt snow faster). A simple model might be that if the reservoir is less than 30, the melting rate is 30% per day; otherwise it is 20% per day. We could implement this by including the statements shown below. The placement of this block is very important - the value of LossRate is only changed when these commands are executed, thus if this block is placed immediately before the for statement, the value of LossRate will always be 0.2. To make this work correctly, this block of commands should be inserted after the for statement but before the OutFlow calculation, so it will always have the correct value for LossRate. Put these changes into your program.
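A sketch of the conditional block described above (the exact original statements are not reproduced in this copy); this fragment belongs inside the for loop of melting.m, before the OutFlow calculation:

% Conditional melting rate: faster melting once the reservoir is small.
if Reservoir < 30
    LossRate = 0.3;       % ground exposed, melting speeds up
else
    LossRate = 0.2;       % normal melting rate
end   % if Reservoir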


It would also be nice to make a plot of the Reservoir and the melting rate as a function of time. We can do this by appending the current values onto the end of a list of the previous values. A script file meltplot.m to do this is sketched below. We could even extend the model further by saying that if the Reservoir is less than 10, it all melts immediately. We could do this using the if-elseif structure. Modify your program to do this.
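And a sketch of what meltplot.m might look like, again using the same variable names; the original listing is not reproduced in this copy.

% meltplot.m - append current values onto lists and plot them vs. time.
Reservoir = 100;
ReservoirList = [];                    % history of the reservoir
OutFlowList   = [];                    % history of the daily melting

for Day = 1:20
    if Reservoir < 30
        LossRate = 0.3;
    else
        LossRate = 0.2;
    end   % if Reservoir
    OutFlow   = LossRate * Reservoir;
    Reservoir = Reservoir - OutFlow;
    ReservoirList = [ReservoirList Reservoir];   % append current values
    OutFlowList   = [OutFlowList OutFlow];
end   % for Day

plot(1:20, ReservoirList, 1:20, OutFlowList)
xlabel('Day'), legend('Reservoir', 'Melting')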


Electronic Copy: http://physics.gac.edu/


Display of Frequency Response Functions


The FRF of an LTI system is in general complex; it can be represented in terms of either its real and imaginary parts, or its magnitude and phase:


The magnitude and phase angle are called the gain and phase shift of the system, respectively. The FRF can be plotted in several different ways.


The real part and the imaginary part can each be plotted individually as a real function of frequency.


The gain and phase shift can each be plotted individually as a function of frequency.


A Bode plot plots the gain and phase shift as functions of the frequency on a base-10 logarithmic scale. The gain is plotted on a logarithmic scale, called the log-magnitude, defined as 20 log10 of the gain.

The unit of the log-magnitude is the decibel, denoted by dB.


A Nyquist diagram plots the value of the FRF at each frequency in the 2-D complex plane, either as a point whose horizontal and vertical coordinates are the real and imaginary parts (a Cartesian coordinate system), or, equivalently, as a vector whose length and angle are the magnitude and phase (a polar coordinate system). The Nyquist diagram of the FRF is the locus of all such points as the frequency varies over the entire frequency range.


The FRF of a first-order system (with time constant τ) has the standard form H(ω) = 1/(1 + jωτ).


The following is the Nyquist diagram of the FRF of a third-order system:
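Since the figures themselves are not reproduced here, the following hedged Matlab sketch shows how the gain, phase and Nyquist plots of an illustrative first-order FRF, H(ω) = 1/(1 + jωτ), could be generated; the time constant and frequency range are assumptions, not the system shown in the original figures.

% Gain, phase and Nyquist plots of an illustrative first-order FRF.
tau = 1;                                % time constant (assumed)
w   = logspace(-2, 2, 400);             % frequency axis (rad/s)
H   = 1 ./ (1 + 1j*w*tau);              % frequency response function

subplot(3,1,1); semilogx(w, 20*log10(abs(H))); ylabel('Gain (dB)')
subplot(3,1,2); semilogx(w, angle(H)*180/pi); ylabel('Phase (deg)')
xlabel('\omega (rad/s)')
subplot(3,1,3); plot(real(H), imag(H)); axis equal
xlabel('Re H(\omega)'); ylabel('Im H(\omega)')   % Nyquist locus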


In the context of signal processing, an LTI system can be treated as a filter whose output is the filtered version of the input. In the frequency domain, we have Y(ω) = H(ω) X(ω).

This equation can be separated into magnitude and phase: |Y(ω)| = |H(ω)| |X(ω)| and ∠Y(ω) = ∠H(ω) + ∠X(ω).


We consider both aspects of the filtering process.


Various filtering schemes can be implemented based on the gain of the filter. Depending on which part of the signal spectrum is enhanced or attenuated, a filter can be classified as one of these different types: low-pass (LP), high-pass (HP), band-pass (BP), and band-stop (BS) filters. If the gain is a constant independent of frequency (although the phase shift may vary as a function of frequency), then the filter is said to be an all-pass (AP) filter.


A filter can be characterized by two parameters:


The cutoff frequency ωc of a filter is the frequency at which the gain |H(ω)| is reduced to 1/√2 (about 70.7%) of the maximum magnitude (gain) at some peak frequency ωp.

The cutoff frequency is also called the half-power frequency, as the power of the filtered signal at ωc is half of the maximum power at the peak frequency ωp. In log-magnitude scale, the gain at ωc is about 20 log10(1/√2) ≈ -3 dB relative to the peak.

The bandwidth of a BP filter is the interval between the two cutoff frequencies on either side of the peak frequency: BW = ωc2 - ωc1.

The higher the value of the quality factor Q = ωp / BW, the narrower the BP filter is.


In the filtering process, the phase shift of the filter is in general non-zero, therefore the phase angles of the frequency components contained in the input will be modified as well as their magnitudes. Below we consider two different types of filters.


Linear phase filtering and phase delay


Consider an all-pass filter whose phase shift is a linear function of frequency, θ(ω) = -ωτ. Each frequency component of the input passes through with unchanged magnitude and its phase shifted by -ωτ; equivalently, it is time-delayed by τ.

Integrating over frequency, we get the output signal in the time domain: y(t) = x(t - τ).

Note that this is actually the time-shift property of the Fourier transform, and the shape of the signal remains the same except that it is delayed by τ.

In general, a filter (not necessarily AP) with linear phase will delay all frequency components of an input signal by the same amount, τp = -θ(ω)/ω,

which is called the phase delay of the linear-phase filter. The relative positions of these frequency components remain the same; only their magnitudes are modified by the gain of the filter.

Note that a phase shift containing a constant term, θ(ω) = φ0 - ωτ, is NOT a linear function of frequency through the origin, and therefore does not define a linear-phase filter in this sense. After AP filtering with such a phase shift, a signal consisting of, say, two sinusoidal components is altered: due to the constant component of the phase shift, the two components have different time delays, and their relative positions are changed.


Non-linear phase filtering and group delay:


If H(ω) is a non-linear-phase filter, i.e., its phase θ(ω) is not a linear function of ω, the frequency components contained in a signal will be time-shifted differently, their relative temporal positions will no longer remain the same, and the waveform of the signal will be distorted by the filter, even if the gain is constant. In this case, we can still define the group delay, τg(ω) = -dθ(ω)/dω, for a set of components in a narrow frequency band centered around ω,

which is a function of ω, instead of a constant as in the case of linear-phase filtering.


To understand the significance of the group delay, consider a signal containing two components of nearby frequencies:

This is a sinusoid of high frequency with its amplitude modulated by a sinusoid of low frequency (the envelope). When filtered by an AP filter with a non-linear phase shift, the carrier is delayed by the phase delay while the envelope is delayed by the group delay.


10.3.2 Filtering


We will examine audio filtering in the sense of specific frequency suppression and extraction. There are many different filter designs available for this purpose. We will specifically use the Butterworth filter.


Matlab includes function butter for building Butterworth filters of three sorts:


'low' . Low-pass filters, which remove frequencies greater than some specified value.


'high' . High-pass filters, which remove frequencies lower than some specified value.


'stop' . Stop-band filters, which remove frequencies in a given range of values.


Frequency values are specified in normalized terms between 0.0 and 1.0, where 1.0 corresponds to half the sampling frequency, f/2. A given frequency is thus expressed in terms of this value; for example, 1000 Hz = 1000/(f/2).


Filters are described in terms of 2 vectors ([b, a] = [numerator, denominator]).


To apply a filter to a 1-D audio waveform, Matlab provides the function filtfilt, which takes as arguments the coefficient vectors [b, a] returned by butter and the waveform itself. (The order, i.e. the number of coefficients, is specified when calling butter.)


A filter's frequency response can be plotted using the function freqz. Magnitude values at zero dB are unaffected by the filter; magnitude values below 0 dB are suppressed.


10.3.2.1 Low-pass filter


We design a 10th-order low-pass filter to suppress frequencies higher than 200 Hz.


% Assume funky is a 1-D audio waveform sampled at f Hz.
fNorm = 200 / (f/2);                    % normalized cutoff frequency
[b, a] = butter(10, fNorm, 'low');      % 10th-order Butterworth low-pass
funkyLow = filtfilt(b, a, funky);       % zero-phase filtering of the audio
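To inspect the filter's response with freqz, as mentioned above, something like the following can be used; the sampling rate f is an assumed value, and the funky waveform itself is not needed for this check.

% Viewing the low-pass filter's frequency response (sketch only).
f = 44100;                              % assumed sampling rate in Hz
fNorm = 200 / (f/2);                    % normalized cutoff frequency
[b, a] = butter(10, fNorm, 'low');      % same design as above
freqz(b, a, 1024, f);                   % magnitude/phase vs frequency in Hz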

