Commit 0b68177c authored by Steve French
parents d0d2f2df 7cef5677
......@@ -14,7 +14,7 @@
</authorgroup>
<copyright>
<year>2003</year>
<year>2003-2005</year>
<holder>Jeff Garzik</holder>
</copyright>
......@@ -44,30 +44,38 @@
<toc></toc>
<chapter id="libataThanks">
<title>Thanks</title>
<chapter id="libataIntroduction">
<title>Introduction</title>
<para>
The bulk of the ATA knowledge comes thanks to long conversations with
Andre Hedrick (www.linux-ide.org).
libATA is a library used inside the Linux kernel to support ATA host
controllers and devices. libATA provides an ATA driver API, class
transports for ATA and ATAPI devices, and SCSI&lt;-&gt;ATA translation
for ATA devices according to the T10 SAT specification.
</para>
<para>
Thanks to Alan Cox for pointing out similarities
between SATA and SCSI, and in general for motivation to hack on
libata.
</para>
<para>
libata's device detection
method, ata_pio_devchk, and in general all the early probing was
based on extensive study of Hale Landis's probe/reset code in his
ATADRVR driver (www.ata-atapi.com).
This Guide documents the libATA driver API, library functions, library
internals, and a couple sample ATA low-level drivers.
</para>
</chapter>
<chapter id="libataDriverApi">
<title>libata Driver API</title>
<para>
struct ata_port_operations is defined for every low-level libata
hardware driver, and it controls how the low-level driver
interfaces with the ATA and SCSI layers.
</para>
<para>
FIS-based drivers will hook into the system with ->qc_prep() and
->qc_issue() high-level hooks. Hardware which behaves in a manner
similar to PCI IDE hardware may utilize several generic helpers,
defining at a bare minimum the bus I/O addresses of the ATA shadow
register blocks.
</para>
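<para>
As a rough illustration (not lifted from any particular driver), a PCI
IDE-style low-level driver might wire most of these hooks to the generic
helpers exported by libata. The helper names below are assumptions based
on this era of the library and may differ between kernel versions:
</para>
<programlisting>
/* hypothetical PCI IDE-style driver: most hooks point at libata helpers */
static struct ata_port_operations my_pata_ops = {
	.port_disable	= ata_port_disable,
	.tf_load	= ata_tf_load,
	.tf_read	= ata_tf_read,
	.check_status	= ata_check_status,
	.exec_command	= ata_exec_command,
	.dev_select	= ata_std_dev_select,
	.phy_reset	= ata_bus_reset,	/* or a driver-specific probe/reset */
	.bmdma_setup	= ata_bmdma_setup,
	.bmdma_start	= ata_bmdma_start,
	.bmdma_stop	= ata_bmdma_stop,
	.bmdma_status	= ata_bmdma_status,
	.qc_prep	= ata_qc_prep,
	.qc_issue	= ata_qc_issue_prot,
	.eng_timeout	= ata_eng_timeout,
	.irq_handler	= ata_interrupt,
	.irq_clear	= ata_bmdma_irq_clear,
	.port_start	= ata_port_start,
	.port_stop	= ata_port_stop,
	/* .host_stop, .set_piomode, .set_dmamode, etc. as the hardware requires */
};
</programlisting>
<para>
A FIS-based driver would instead supply its own ->qc_prep()/->qc_issue()
pair, as described above, and typically leaves the shadow-register and
BMDMA hooks unimplemented.
</para>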
<sect1>
<title>struct ata_port_operations</title>
<sect2><title>Disable ATA port</title>
<programlisting>
void (*port_disable) (struct ata_port *);
</programlisting>
......@@ -78,6 +86,9 @@ void (*port_disable) (struct ata_port *);
unplug).
</para>
</sect2>
<sect2><title>Post-IDENTIFY device configuration</title>
<programlisting>
void (*dev_config) (struct ata_port *, struct ata_device *);
</programlisting>
......@@ -88,6 +99,9 @@ void (*dev_config) (struct ata_port *, struct ata_device *);
issue of SET FEATURES - XFER MODE, and prior to operation.
</para>
</sect2>
<sect2><title>Set PIO/DMA mode</title>
<programlisting>
void (*set_piomode) (struct ata_port *, struct ata_device *);
void (*set_dmamode) (struct ata_port *, struct ata_device *);
......@@ -108,6 +122,9 @@ void (*post_set_mode) (struct ata_port *ap);
->set_dmamode() is only called if DMA is possible.
</para>
</sect2>
<sect2><title>Taskfile read/write</title>
<programlisting>
void (*tf_load) (struct ata_port *ap, struct ata_taskfile *tf);
void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf);
......@@ -120,6 +137,9 @@ void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf);
taskfile register values.
</para>
</sect2>
<sect2><title>ATA command execute</title>
<programlisting>
void (*exec_command)(struct ata_port *ap, struct ata_taskfile *tf);
</programlisting>
......@@ -129,17 +149,37 @@ void (*exec_command)(struct ata_port *ap, struct ata_taskfile *tf);
->tf_load(), to be initiated in hardware.
</para>
</sect2>
<sect2><title>Per-cmd ATAPI DMA capabilities filter</title>
<programlisting>
int (*check_atapi_dma) (struct ata_queued_cmd *qc);
</programlisting>
<para>
Allows the low-level driver to filter ATA PACKET commands, returning a status
indicating whether or not it is OK to use DMA for the supplied PACKET
command.
</para>
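<para>
As a hypothetical sketch, a driver for a controller that cannot perform
ATAPI DMA under some conditions might implement the filter as below. The
polarity is an assumption: a non-zero return is taken to mean "fall back
to PIO for this command".
</para>
<programlisting>
/* hypothetical per-port private data, set up by the driver elsewhere */
struct my_port_priv {
	int	atapi_dma_broken;	/* set when the chip cannot DMA ATAPI */
};

static int my_check_atapi_dma(struct ata_queued_cmd *qc)
{
	struct my_port_priv *pp = qc->ap->private_data;

	/* assumed convention: non-zero asks libata to use PIO instead of DMA */
	return pp->atapi_dma_broken;
}
</programlisting>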
</sect2>
<sect2><title>Read specific ATA shadow registers</title>
<programlisting>
u8 (*check_status)(struct ata_port *ap);
void (*dev_select)(struct ata_port *ap, unsigned int device);
u8 (*check_altstatus)(struct ata_port *ap);
u8 (*check_err)(struct ata_port *ap);
</programlisting>
<para>
Reads the Status ATA shadow register from hardware. On some
hardware, this has the side effect of clearing the interrupt
condition.
Reads the Status/AltStatus/Error ATA shadow register from
hardware. On some hardware, reading the Status register has
the side effect of clearing the interrupt condition.
</para>
</sect2>
<sect2><title>Select ATA device on bus</title>
<programlisting>
void (*dev_select)(struct ata_port *ap, unsigned int device);
</programlisting>
......@@ -147,9 +187,13 @@ void (*dev_select)(struct ata_port *ap, unsigned int device);
<para>
Issues the low-level hardware command(s) that causes one of N
hardware devices to be considered 'selected' (active and
available for use) on the ATA bus.
available for use) on the ATA bus. This generally has no
meaning on FIS-based devices.
</para>
</sect2>
<sect2><title>Reset ATA bus</title>
<programlisting>
void (*phy_reset) (struct ata_port *ap);
</programlisting>
......@@ -162,17 +206,31 @@ void (*phy_reset) (struct ata_port *ap);
functions ata_bus_reset() or sata_phy_reset() for this hook.
</para>
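<para>
For example (a sketch only), a SATA driver with no special reset
requirements might simply wrap the sata_phy_reset() helper mentioned
above, adding any controller-specific preparation first:
</para>
<programlisting>
/* hypothetical SATA ->phy_reset hook */
static void my_phy_reset(struct ata_port *ap)
{
	/* controller-specific wake-up / re-init would go here (assumption) */

	sata_phy_reset(ap);	/* helper named above; probes and classifies the device */
}
</programlisting>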
</sect2>
<sect2><title>Control PCI IDE BMDMA engine</title>
<programlisting>
void (*bmdma_setup) (struct ata_queued_cmd *qc);
void (*bmdma_start) (struct ata_queued_cmd *qc);
void (*bmdma_stop) (struct ata_port *ap);
u8 (*bmdma_status) (struct ata_port *ap);
</programlisting>
<para>
When setting up an IDE BMDMA transaction, these hooks arm
(->bmdma_setup) and fire (->bmdma_start) the hardware's DMA
engine.
When setting up an IDE BMDMA transaction, these hooks arm
(->bmdma_setup), fire (->bmdma_start), and halt (->bmdma_stop)
the hardware's DMA engine. ->bmdma_status is used to read the standard
PCI IDE DMA Status register.
</para>
<para>
These hooks are typically either no-ops, or simply not implemented, in
FIS-based drivers.
</para>
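<para>
A driver whose DMA engine mostly behaves like standard PCI IDE BMDMA can
usually take the generic helpers wholesale; the sketch below (helper names
assumed for this era of libata) overrides only ->bmdma_stop to add a
chip-specific quiesce step:
</para>
<programlisting>
/* hypothetical: extra work before the standard engine shutdown */
static void my_bmdma_stop(struct ata_port *ap)
{
	/* chip-specific "drain FIFO / flush posted writes" step (assumed) */

	ata_bmdma_stop(ap);		/* generic helper, name assumed */
}

static struct ata_port_operations my_bmdma_ops = {
	/* ... taskfile, reset and interrupt hooks as in the earlier sketch ... */
	.bmdma_setup	= ata_bmdma_setup,
	.bmdma_start	= ata_bmdma_start,
	.bmdma_stop	= my_bmdma_stop,
	.bmdma_status	= ata_bmdma_status,
};
</programlisting>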
</sect2>
<sect2><title>High-level taskfile hooks</title>
<programlisting>
void (*qc_prep) (struct ata_queued_cmd *qc);
int (*qc_issue) (struct ata_queued_cmd *qc);
......@@ -190,20 +248,26 @@ int (*qc_issue) (struct ata_queued_cmd *qc);
->qc_issue is used to make a command active, once the hardware
and S/G tables have been prepared. IDE BMDMA drivers use the
helper function ata_qc_issue_prot() for taskfile protocol-based
dispatch. More advanced drivers roll their own ->qc_issue
implementation, using this as the "issue new ATA command to
hardware" hook.
dispatch. More advanced drivers implement their own ->qc_issue.
</para>
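<para>
As an illustrative sketch, a BMDMA-style driver that must touch a chip
register before every command could wrap the ata_qc_issue_prot() helper
named above; the surrounding details are assumptions, not taken from a
real driver:
</para>
<programlisting>
static int my_qc_issue(struct ata_queued_cmd *qc)
{
	/* a chip-specific "arm the engine" step for qc->ap would go here
	 * (assumption; omitted to keep this sketch self-contained) */

	return ata_qc_issue_prot(qc);	/* standard taskfile-protocol dispatch */
}
</programlisting>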
</sect2>
<sect2><title>Timeout (error) handling</title>
<programlisting>
void (*eng_timeout) (struct ata_port *ap);
</programlisting>
<para>
This is a high level error handling function, called from the
error handling thread, when a command times out.
This is a high level error handling function, called from the
error handling thread, when a command times out. Most newer
hardware will implement its own error handling code here. IDE BMDMA
drivers may use the helper function ata_eng_timeout().
</para>
</sect2>
<sect2><title>Hardware interrupt handling</title>
<programlisting>
irqreturn_t (*irq_handler)(int, void *, struct pt_regs *);
void (*irq_clear) (struct ata_port *);
......@@ -216,6 +280,9 @@ void (*irq_clear) (struct ata_port *);
is quiet.
</para>
</sect2>
<sect2><title>SATA phy read/write</title>
<programlisting>
u32 (*scr_read) (struct ata_port *ap, unsigned int sc_reg);
void (*scr_write) (struct ata_port *ap, unsigned int sc_reg,
......@@ -227,6 +294,9 @@ void (*scr_write) (struct ata_port *ap, unsigned int sc_reg,
if ->phy_reset hook called the sata_phy_reset() helper function.
</para>
</sect2>
<sect2><title>Init and shutdown</title>
<programlisting>
int (*port_start) (struct ata_port *ap);
void (*port_stop) (struct ata_port *ap);
......@@ -240,15 +310,17 @@ void (*host_stop) (struct ata_host_set *host_set);
tasks.
</para>
<para>
->host_stop() is called when the rmmod or hot unplug process
begins. The hook must stop all hardware interrupts, DMA
engines, etc.
</para>
<para>
->port_stop() is called after ->host_stop(). Its sole function
is to release DMA/memory resources, now that they are no longer
actively being used.
</para>
<para>
->host_stop() is called after all ->port_stop() calls
have completed. The hook must finalize hardware shutdown, release DMA
and other resources, etc.
</para>
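<para>
A minimal sketch, assuming the generic ata_port_start()/ata_port_stop()
helpers of this era (includes and most error handling trimmed): the driver
lets them manage the standard DMA tables and only adds a small private
structure of its own.
</para>
<programlisting>
/* hypothetical driver-private per-port state */
struct my_port_state {
	unsigned int	flags;
};

static int my_port_start(struct ata_port *ap)
{
	struct my_port_state *pp;
	int rc;

	rc = ata_port_start(ap);	/* generic helper, name assumed */
	if (rc)
		return rc;

	pp = kmalloc(sizeof(*pp), GFP_KERNEL);
	if (!pp) {
		ata_port_stop(ap);
		return -ENOMEM;
	}
	memset(pp, 0, sizeof(*pp));
	ap->private_data = pp;
	return 0;
}

static void my_port_stop(struct ata_port *ap)
{
	kfree(ap->private_data);
	ata_port_stop(ap);	/* frees what ata_port_start() allocated */
}
</programlisting>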
</sect2>
</sect1>
</chapter>
......@@ -279,4 +351,24 @@ void (*host_stop) (struct ata_host_set *host_set);
!Idrivers/scsi/sata_sil.c
</chapter>
<chapter id="libataThanks">
<title>Thanks</title>
<para>
The bulk of the ATA knowledge comes thanks to long conversations with
Andre Hedrick (www.linux-ide.org), and long hours pondering the ATA
and SCSI specifications.
</para>
<para>
Thanks to Alan Cox for pointing out similarities
between SATA and SCSI, and in general for motivation to hack on
libata.
</para>
<para>
libata's device detection
method, ata_pio_devchk, and in general all the early probing was
based on extensive study of Hale Landis's probe/reset code in his
ATADRVR driver (www.ata-atapi.com).
</para>
</chapter>
</book>
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 12
EXTRAVERSION =-rc5
EXTRAVERSION =-rc6
NAME=Woozy Numbat
# *DOCUMENTATION*
......
......@@ -45,11 +45,13 @@ asmlinkage void ret_from_fork(void);
*/
void default_idle(void)
{
while(1) {
if (need_resched())
__asm__("stop #0x2000" : : : "cc");
schedule();
local_irq_disable();
while (!need_resched()) {
/* This stop will re-enable interrupts */
__asm__("stop #0x2000" : : : "cc");
local_irq_disable();
}
local_irq_enable();
}
void (*idle)(void) = default_idle;
......@@ -63,7 +65,12 @@ void (*idle)(void) = default_idle;
void cpu_idle(void)
{
/* endless idle loop with no priority at all */
idle();
while (1) {
idle();
preempt_enable_no_resched();
schedule();
preempt_disable();
}
}
void machine_restart(char * __unused)
......
......@@ -436,15 +436,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
REST_8GPRS(14, r1)
REST_10GPRS(22, r1)
#ifdef CONFIG_PPC_ISERIES
clrrdi r7,r1,THREAD_SHIFT /* get current_thread_info() */
ld r7,TI_FLAGS(r7) /* Get run light flag */
mfspr r9,CTRLF
srdi r7,r7,TIF_RUN_LIGHT
insrdi r9,r7,1,63 /* Insert run light into CTRL */
mtspr CTRLT,r9
#endif
/* convert old thread to its task_struct for return value */
addi r3,r3,-THREAD
ld r7,_NIP(r1) /* Return to _switch caller in new task */
......
......@@ -626,10 +626,10 @@ system_reset_iSeries:
lhz r24,PACAPACAINDEX(r13) /* Get processor # */
cmpwi 0,r24,0 /* Are we processor 0? */
beq .__start_initialization_iSeries /* Start up the first processor */
mfspr r4,CTRLF
li r5,RUNLATCH /* Turn off the run light */
mfspr r4,SPRN_CTRLF
li r5,CTRL_RUNLATCH /* Turn off the run light */
andc r4,r4,r5
mtspr CTRLT,r4
mtspr SPRN_CTRLT,r4
1:
HMT_LOW
......@@ -2082,9 +2082,9 @@ _GLOBAL(hmt_start_secondary)
mfspr r4, HID0
ori r4, r4, 0x1
mtspr HID0, r4
mfspr r4, CTRLF
mfspr r4, SPRN_CTRLF
oris r4, r4, 0x40
mtspr CTRLT, r4
mtspr SPRN_CTRLT, r4
blr
#endif
......
......@@ -852,6 +852,28 @@ static int __init iSeries_src_init(void)
late_initcall(iSeries_src_init);
static int set_spread_lpevents(char *str)
{
unsigned long i;
unsigned long val = simple_strtoul(str, NULL, 0);
/*
* The parameter is the number of processors to share in processing
* lp events.
*/
if (( val > 0) && (val <= NR_CPUS)) {
for (i = 1; i < val; ++i)
paca[i].lpqueue_ptr = paca[0].lpqueue_ptr;
printk("lpevent processing spread over %ld processors\n", val);
} else {
printk("invalid spread_lpevents %ld\n", val);
}
return 1;
}
__setup("spread_lpevents=", set_spread_lpevents);
void __init iSeries_early_setup(void)
{
iSeries_fixup_klimit();
......
......@@ -75,13 +75,9 @@ static int iSeries_idle(void)
{
struct paca_struct *lpaca;
long oldval;
unsigned long CTRL;
/* ensure iSeries run light will be out when idle */
clear_thread_flag(TIF_RUN_LIGHT);
CTRL = mfspr(CTRLF);
CTRL &= ~RUNLATCH;
mtspr(CTRLT, CTRL);
ppc64_runlatch_off();
lpaca = get_paca();
......@@ -111,7 +107,9 @@ static int iSeries_idle(void)
}
}
ppc64_runlatch_on();
schedule();
ppc64_runlatch_off();
}
return 0;
......
......@@ -378,9 +378,6 @@ copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
childregs->gpr[1] = sp + sizeof(struct pt_regs);
p->thread.regs = NULL; /* no user register state */
clear_ti_thread_flag(p->thread_info, TIF_32BIT);
#ifdef CONFIG_PPC_ISERIES
set_ti_thread_flag(p->thread_info, TIF_RUN_LIGHT);
#endif
} else {
childregs->gpr[1] = usp;
p->thread.regs = childregs;
......
......@@ -1370,7 +1370,7 @@ static int __init prom_find_machine_type(void)
}
/* Default to pSeries. We need to know if we are running LPAR */
rtas = call_prom("finddevice", 1, 1, ADDR("/rtas"));
if (!PHANDLE_VALID(rtas)) {
if (PHANDLE_VALID(rtas)) {
int x = prom_getproplen(rtas, "ibm,hypertas-functions");
if (x != PROM_ERROR) {
prom_printf("Hypertas detected, assuming LPAR !\n");
......
......@@ -103,11 +103,6 @@ extern void unflatten_device_tree(void);
extern void smp_release_cpus(void);
unsigned long decr_overclock = 1;
unsigned long decr_overclock_proc0 = 1;
unsigned long decr_overclock_set = 0;
unsigned long decr_overclock_proc0_set = 0;
int have_of = 1;
int boot_cpuid = 0;
int boot_cpuid_phys = 0;
......@@ -1120,64 +1115,15 @@ void ppc64_dump_msg(unsigned int src, const char *msg)
printk("[dump]%04x %s\n", src, msg);
}
int set_spread_lpevents( char * str )
{
/* The parameter is the number of processors to share in processing lp events */
unsigned long i;
unsigned long val = simple_strtoul( str, NULL, 0 );
if ( ( val > 0 ) && ( val <= NR_CPUS ) ) {
for ( i=1; i<val; ++i )
paca[i].lpqueue_ptr = paca[0].lpqueue_ptr;
printk("lpevent processing spread over %ld processors\n", val);
}
else
printk("invalid spreaqd_lpevents %ld\n", val);
return 1;
}
/* This should only be called on processor 0 during calibrate decr */
void setup_default_decr(void)
{
struct paca_struct *lpaca = get_paca();
if ( decr_overclock_set && !decr_overclock_proc0_set )
decr_overclock_proc0 = decr_overclock;
lpaca->default_decr = tb_ticks_per_jiffy / decr_overclock_proc0;
lpaca->default_decr = tb_ticks_per_jiffy;
lpaca->next_jiffy_update_tb = get_tb() + tb_ticks_per_jiffy;
}
int set_decr_overclock_proc0( char * str )
{
unsigned long val = simple_strtoul( str, NULL, 0 );
if ( ( val >= 1 ) && ( val <= 48 ) ) {
decr_overclock_proc0_set = 1;
decr_overclock_proc0 = val;
printk("proc 0 decrementer overclock factor of %ld\n", val);
}
else
printk("invalid proc 0 decrementer overclock factor of %ld\n", val);
return 1;
}
int set_decr_overclock( char * str )
{
unsigned long val = simple_strtoul( str, NULL, 0 );
if ( ( val >= 1 ) && ( val <= 48 ) ) {
decr_overclock_set = 1;
decr_overclock = val;
printk("decrementer overclock factor of %ld\n", val);
}
else
printk("invalid decrementer overclock factor of %ld\n", val);
return 1;
}
__setup("spread_lpevents=", set_spread_lpevents );
__setup("decr_overclock_proc0=", set_decr_overclock_proc0 );
__setup("decr_overclock=", set_decr_overclock );
#ifndef CONFIG_PPC_ISERIES
/*
* This function can be used by platforms to "find" legacy serial ports.
......
......@@ -334,7 +334,6 @@ void smp_call_function_interrupt(void)
}
}
extern unsigned long decr_overclock;
extern struct gettimeofday_struct do_gtod;
struct thread_info *current_set[NR_CPUS];
......@@ -491,7 +490,7 @@ int __devinit __cpu_up(unsigned int cpu)
if (smp_ops->cpu_bootable && !smp_ops->cpu_bootable(cpu))
return -EINVAL;
paca[cpu].default_decr = tb_ticks_per_jiffy / decr_overclock;
paca[cpu].default_decr = tb_ticks_per_jiffy;
if (!cpu_has_feature(CPU_FTR_SLB)) {
void *tmp;
......
......@@ -113,7 +113,6 @@ void ppc64_enable_pmcs(void)
#ifdef CONFIG_PPC_PSERIES
unsigned long set, reset;
int ret;
unsigned int ctrl;
#endif /* CONFIG_PPC_PSERIES */
/* Only need to enable them once */
......@@ -167,11 +166,8 @@ void ppc64_enable_pmcs(void)
* On SMT machines we have to set the run latch in the ctrl register
* in order to make PMC6 spin.
*/
if (cpu_has_feature(CPU_FTR_SMT)) {
ctrl = mfspr(CTRLF);
ctrl |= RUNLATCH;
mtspr(CTRLT, ctrl);
}
if (cpu_has_feature(CPU_FTR_SMT))
ppc64_runlatch_on();
#endif /* CONFIG_PPC_PSERIES */
}
......
......@@ -28,6 +28,7 @@
//#include <linux/kernel_stat.h>
#include <linux/notifier.h>
#include <linux/cpu.h>
#include <linux/workqueue.h>
#include "appldata.h"
......@@ -133,9 +134,12 @@ static int appldata_interval = APPLDATA_CPU_INTERVAL;
static int appldata_timer_active;
/*
* Tasklet
* Work queue
*/
static struct tasklet_struct appldata_tasklet_struct;
static struct workqueue_struct *appldata_wq;
static void appldata_work_fn(void *data);
static DECLARE_WORK(appldata_work, appldata_work_fn, NULL);
/*
* Ops list
......@@ -144,11 +148,11 @@ static DEFINE_SPINLOCK(appldata_ops_lock);
static LIST_HEAD(appldata_ops_list);
/************************* timer, tasklet, DIAG ******************************/
/*************************** timer, work, DIAG *******************************/
/*
* appldata_timer_function()
*
* schedule tasklet and reschedule timer
* schedule work and reschedule timer
*/
static void appldata_timer_function(unsigned long data, struct pt_regs *regs)
{
......@@ -157,22 +161,22 @@ static void appldata_timer_function(unsigned long data, struct pt_regs *regs)
atomic_read(&appldata_expire_count));
if (atomic_dec_and_test(&appldata_expire_count)) {
atomic_set(&appldata_expire_count, num_online_cpus());
tasklet_schedule((struct tasklet_struct *) data);
queue_work(appldata_wq, (struct work_struct *) data);
}
}
/*
* appldata_tasklet_function()
* appldata_work_fn()
*
* call data gathering function for each (active) module
*/
static void appldata_tasklet_function(unsigned long data)
static void appldata_work_fn(void *data)
{
struct list_head *lh;
struct appldata_ops *ops;
int i;
P_DEBUG(" -= Tasklet =-\n");
P_DEBUG(" -= Work Queue =-\n");
i = 0;
spin_lock(&appldata_ops_lock);
list_for_each(lh, &appldata_ops_list) {
......@@ -231,7 +235,7 @@ static int appldata_diag(char record_nr, u16 function, unsigned long buffer,
: "=d" (ry) : "d" (&(appldata_parameter_list)) : "cc");
return (int) ry;
}
/********************** timer, tasklet, DIAG <END> ***************************/
/************************ timer, work, DIAG <END> ****************************/
/****************************** /proc stuff **********************************/
......@@ -411,7 +415,7 @@ appldata_generic_handler(ctl_table *ctl, int write, struct file *filp,
struct list_head *lh;
found = 0;
spin_lock_bh(&appldata_ops_lock);
spin_lock(&appldata_ops_lock);
list_for_each(lh, &appldata_ops_list) {
tmp_ops = list_entry(lh, struct appldata_ops, list);
if (&tmp_ops->ctl_table[2] == ctl) {
......@@ -419,15 +423,15 @@ appldata_generic_handler(ctl_table *ctl, int write, struct file *filp,
}
}
if (!found) {
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
return -ENODEV;
}
ops = ctl->data;
if (!try_module_get(ops->owner)) { // protect this function
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
return -ENODEV;
}
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
if (!*lenp || *ppos) {
*lenp = 0;
......@@ -451,10 +455,11 @@ appldata_generic_handler(ctl_table *ctl, int write, struct file *filp,
return -EFAULT;
}
spin_lock_bh(&appldata_ops_lock);
spin_lock(&appldata_ops_lock);
if ((buf[0] == '1') && (ops->active == 0)) {
if (!try_module_get(ops->owner)) { // protect tasklet
spin_unlock_bh(&appldata_ops_lock);
// protect work queue callback
if (!try_module_get(ops->owner)) {
spin_unlock(&appldata_ops_lock);
module_put(ops->owner);
return -ENODEV;
}
......@@ -485,7 +490,7 @@ appldata_generic_handler(ctl_table *ctl, int write, struct file *filp,
}
module_put(ops->owner);
}
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
out:
*lenp = len;
*ppos += len;
......@@ -529,7 +534,7 @@ int appldata_register_ops(struct appldata_ops *ops)
}
memset(ops->ctl_table, 0, 4*sizeof(struct ctl_table));
spin_lock_bh(&appldata_ops_lock);
spin_lock(&appldata_ops_lock);
list_for_each(lh, &appldata_ops_list) {
tmp_ops = list_entry(lh, struct appldata_ops, list);
P_DEBUG("register_ops loop: %i) name = %s, ctl = %i\n",
......@@ -541,18 +546,18 @@ int appldata_register_ops(struct appldata_ops *ops)
APPLDATA_PROC_NAME_LENGTH) == 0) {
P_ERROR("Name \"%s\" already registered!\n", ops->name);
kfree(ops->ctl_table);
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
return -EBUSY;
}
if (tmp_ops->ctl_nr == ops->ctl_nr) {
P_ERROR("ctl_nr %i already registered!\n", ops->ctl_nr);
kfree(ops->ctl_table);
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
return -EBUSY;
}
}
list_add(&ops->list, &appldata_ops_list);
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
ops->ctl_table[0].ctl_name = CTL_APPLDATA;
ops->ctl_table[0].procname = appldata_proc_name;
......@@ -583,12 +588,12 @@ int appldata_register_ops(struct appldata_ops *ops)
*/
void appldata_unregister_ops(struct appldata_ops *ops)
{
spin_lock_bh(&appldata_ops_lock);
spin_lock(&appldata_ops_lock);
unregister_sysctl_table(ops->sysctl_header);
list_del(&ops->list);
kfree(ops->ctl_table);
ops->ctl_table = NULL;
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
P_INFO("%s-ops unregistered!\n", ops->name);
}
/********************** module-ops management <END> **************************/
......@@ -602,7 +607,7 @@ appldata_online_cpu(int cpu)
init_virt_timer(&per_cpu(appldata_timer, cpu));
per_cpu(appldata_timer, cpu).function = appldata_timer_function;
per_cpu(appldata_timer, cpu).data = (unsigned long)
&appldata_tasklet_struct;
&appldata_work;
atomic_inc(&appldata_expire_count);
spin_lock(&appldata_timer_lock);
__appldata_vtimer_setup(APPLDATA_MOD_TIMER);
......@@ -615,7 +620,7 @@ appldata_offline_cpu(int cpu)
del_virt_timer(&per_cpu(appldata_timer, cpu));
if (atomic_dec_and_test(&appldata_expire_count)) {
atomic_set(&appldata_expire_count, num_online_cpus());
tasklet_schedule(&appldata_tasklet_struct);
queue_work(appldata_wq, &appldata_work);
}
spin_lock(&appldata_timer_lock);
__appldata_vtimer_setup(APPLDATA_MOD_TIMER);
......@@ -648,7 +653,7 @@ static struct notifier_block __devinitdata appldata_nb = {
/*
* appldata_init()
*
* init timer and tasklet, register /proc entries
* init timer, register /proc entries
*/
static int __init appldata_init(void)
{
......@@ -657,6 +662,12 @@ static int __init appldata_init(void)
P_DEBUG("sizeof(parameter_list) = %lu\n",
sizeof(struct appldata_parameter_list));
appldata_wq = create_singlethread_workqueue("appldata");
if (!appldata_wq) {
P_ERROR("Could not create work queue\n");
return -ENOMEM;
}
for_each_online_cpu(i)
appldata_online_cpu(i);
......@@ -670,7 +681,6 @@ static int __init appldata_init(void)
appldata_table[1].de->owner = THIS_MODULE;
#endif
tasklet_init(&appldata_tasklet_struct, appldata_tasklet_function, 0);
P_DEBUG("Base interface initialized.\n");
return 0;
}
......@@ -678,7 +688,7 @@ static int __init appldata_init(void)
/*
* appldata_exit()
*
* stop timer and tasklet, unregister /proc entries
* stop timer, unregister /proc entries
*/
static void __exit appldata_exit(void)
{
......@@ -690,7 +700,7 @@ static void __exit appldata_exit(void)
/*
* ops list should be empty, but just in case something went wrong...
*/
spin_lock_bh(&appldata_ops_lock);
spin_lock(&appldata_ops_lock);
list_for_each(lh, &appldata_ops_list) {
ops = list_entry(lh, struct appldata_ops, list);
rc = appldata_diag(ops->record_nr, APPLDATA_STOP_REC,
......@@ -700,7 +710,7 @@ static void __exit appldata_exit(void)
"return code: %d\n", ops->name, rc);
}
}
spin_unlock_bh(&appldata_ops_lock);
spin_unlock(&appldata_ops_lock);
for_each_online_cpu(i)
appldata_offline_cpu(i);
......@@ -709,7 +719,7 @@ static void __exit appldata_exit(void)
unregister_sysctl_table(appldata_sysctl_header);
tasklet_kill(&appldata_tasklet_struct);
destroy_workqueue(appldata_wq);
P_DEBUG("... module unloaded!\n");
}
/**************************** init / exit <END> ******************************/
......
......@@ -68,7 +68,7 @@ struct appldata_mem_data {
u64 pgmajfault; /* page faults (major only) */
// <-- New in 2.6
} appldata_mem_data;
} __attribute__((packed)) appldata_mem_data;
static inline void appldata_debug_print(struct appldata_mem_data *mem_data)
......
......@@ -57,7 +57,7 @@ struct appldata_net_sum_data {
u64 rx_dropped; /* no space in linux buffers */
u64 tx_dropped; /* no space available in linux */
u64 collisions; /* collisions while transmitting */
} appldata_net_sum_data;
} __attribute__((packed)) appldata_net_sum_data;
static inline void appldata_print_debug(struct appldata_net_sum_data *net_data)
......
......@@ -49,7 +49,7 @@ struct appldata_os_per_cpu {
u32 per_cpu_softirq; /* ... spent in softirqs */
u32 per_cpu_iowait; /* ... spent while waiting for I/O */
// <-- New in 2.6
};
} __attribute__((packed));
struct appldata_os_data {
u64 timestamp;
......@@ -75,7 +75,7 @@ struct appldata_os_data {
/* per cpu data */
struct appldata_os_per_cpu os_cpu[0];
};
} __attribute__((packed));
static struct appldata_os_data *appldata_os_data;
......
......@@ -40,6 +40,7 @@
#include <asm/pgalloc.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include <asm/unistd.h>
#ifdef CONFIG_S390_SUPPORT
#include "compat_ptrace.h"
......@@ -130,13 +131,19 @@ static int
peek_user(struct task_struct *child, addr_t addr, addr_t data)
{
struct user *dummy = NULL;
addr_t offset, tmp;
addr_t offset, tmp, mask;
/*
* Stupid gdb peeks/pokes the access registers in 64 bit with
* an alignment of 4. Programmers from hell...
*/
if ((addr & 3) || addr > sizeof(struct user) - __ADDR_MASK)
mask = __ADDR_MASK;
#ifdef CONFIG_ARCH_S390X
if (addr >= (addr_t) &dummy->regs.acrs &&
addr < (addr_t) &dummy->regs.orig_gpr2)
mask = 3;
#endif
if ((addr & mask) || addr > sizeof(struct user) - __ADDR_MASK)
return -EIO;
if (addr < (addr_t) &dummy->regs.acrs) {
......@@ -153,6 +160,16 @@ peek_user(struct task_struct *child, addr_t addr, addr_t data)
* access registers are stored in the thread structure
*/
offset = addr - (addr_t) &dummy->regs.acrs;
#ifdef CONFIG_ARCH_S390X
/*
* Very special case: old & broken 64 bit gdb reading
* from acrs[15]. Result is a 64 bit value. Read the
* 32 bit acrs[15] value and shift it by 32. Sick...
*/
if (addr == (addr_t) &dummy->regs.acrs[15])
tmp = ((unsigned long) child->thread.acrs[15]) << 32;
else
#endif
tmp = *(addr_t *)((addr_t) &child->thread.acrs + offset);
} else if (addr == (addr_t) &dummy->regs.orig_gpr2) {
......@@ -167,6 +184,9 @@ peek_user(struct task_struct *child, addr_t addr, addr_t data)
*/
offset = addr - (addr_t) &dummy->regs.fp_regs;
tmp = *(addr_t *)((addr_t) &child->thread.fp_regs + offset);
if (addr == (addr_t) &dummy->regs.fp_regs.fpc)
tmp &= (unsigned long) FPC_VALID_MASK
<< (BITS_PER_LONG - 32);
} else if (addr < (addr_t) (&dummy->regs.per_info + 1)) {
/*
......@@ -191,13 +211,19 @@ static int
poke_user(struct task_struct *child, addr_t addr, addr_t data)
{
struct user *dummy = NULL;
addr_t offset;
addr_t offset, mask;
/*
* Stupid gdb peeks/pokes the access registers in 64 bit with
* an alignment of 4. Programmers from hell indeed...
*/
if ((addr & 3) || addr > sizeof(struct user) - __ADDR_MASK)
mask = __ADDR_MASK;
#ifdef CONFIG_ARCH_S390X
if (addr >= (addr_t) &dummy->regs.acrs &&
addr < (addr_t) &dummy->regs.orig_gpr2)
mask = 3;
#endif
if ((addr & mask) || addr > sizeof(struct user) - __ADDR_MASK)
return -EIO;
if (addr < (addr_t) &dummy->regs.acrs) {
......@@ -224,6 +250,17 @@ poke_user(struct task_struct *child, addr_t addr, addr_t data)
* access registers are stored in the thread structure
*/
offset = addr - (addr_t) &dummy->regs.acrs;
#ifdef CONFIG_ARCH_S390X
/*
* Very special case: old & broken 64 bit gdb writing
* to acrs[15] with a 64 bit value. Ignore the lower
* half of the value and write the upper 32 bit to
* acrs[15]. Sick...
*/
if (addr == (addr_t) &dummy->regs.acrs[15])
child->thread.acrs[15] = (unsigned int) (data >> 32);
else
#endif
*(addr_t *)((addr_t) &child->thread.acrs + offset) = data;
} else if (addr == (addr_t) &dummy->regs.orig_gpr2) {
......@@ -237,7 +274,8 @@ poke_user(struct task_struct *child, addr_t addr, addr_t data)
* floating point regs. are stored in the thread structure
*/
if (addr == (addr_t) &dummy->regs.fp_regs.fpc &&
(data & ~FPC_VALID_MASK) != 0)
(data & ~((unsigned long) FPC_VALID_MASK
<< (BITS_PER_LONG - 32))) != 0)
return -EINVAL;
offset = addr - (addr_t) &dummy->regs.fp_regs;
*(addr_t *)((addr_t) &child->thread.fp_regs + offset) = data;
......@@ -722,6 +760,13 @@ syscall_trace(struct pt_regs *regs, int entryexit)
ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
? 0x80 : 0));
/*
* If the debugger has set an invalid system call number,
* we prepare to skip the system call restart handling.
*/
if (!entryexit && regs->gprs[2] >= NR_syscalls)
regs->trap = -1;
/*
* this isn't the same as continuing with a signal, but it will do
* for normal use. strace only continues with a signal if the
......
......@@ -207,7 +207,7 @@ do_exception(struct pt_regs *regs, unsigned long error_code, int is_protection)
* we are not in an interrupt and that there is a
* user context.
*/
if (user_address == 0 || in_interrupt() || !mm)
if (user_address == 0 || in_atomic() || !mm)
goto no_context;
/*
......
......@@ -39,7 +39,8 @@ ifeq ($(CONFIG_ATM_FORE200E_PCA),y)
fore_200e-objs += fore200e_pca_fw.o
# guess the target endianess to choose the right PCA-200E firmware image
ifeq ($(CONFIG_ATM_FORE200E_PCA_DEFAULT_FW),y)
CONFIG_ATM_FORE200E_PCA_FW = $(shell if test -n "`$(CC) -E -dM $(src)/../../include/asm/byteorder.h | grep ' __LITTLE_ENDIAN '`"; then echo $(obj)/pca200e.bin; else echo $(obj)/pca200e_ecd.bin2; fi)
byteorder.h := include$(if $(patsubst $(srctree),,$(objtree)),2)/asm/byteorder.h
CONFIG_ATM_FORE200E_PCA_FW := $(obj)/pca200e$(if $(shell $(CC) -E -dM $(byteorder.h) | grep ' __LITTLE_ENDIAN '),.bin,_ecd.bin2)
endif
endif
......
......@@ -383,8 +383,7 @@ fore200e_shutdown(struct fore200e* fore200e)
switch(fore200e->state) {
case FORE200E_STATE_COMPLETE:
if (fore200e->stats)
kfree(fore200e->stats);
kfree(fore200e->stats);
case FORE200E_STATE_IRQ:
free_irq(fore200e->irq, fore200e->atm_dev);
......@@ -963,8 +962,7 @@ fore200e_tx_irq(struct fore200e* fore200e)
entry, txq->tail, entry->vc_map, entry->skb);
/* free copy of misaligned data */
if (entry->data)
kfree(entry->data);
kfree(entry->data);
/* remove DMA mapping */
fore200e->bus->dma_unmap(fore200e, entry->tpd->tsd[ 0 ].buffer, entry->tpd->tsd[ 0 ].length,
......